New multi-physics AI architecture boosts computing speed, efficiency
Chinese researchers from Peking University have developed a new multi-physics AI computing architecture that delivers a nearly fourfold speed increase while reducing power consumption. The design lets different computing paradigms operate in their optimal physical domains, such as electrical current, charge, or light, improving computational efficiency. The system boosts Fourier transform processing speeds from about 130 billion operations per second to roughly 500 billion, consistent with the near fourfold gain. The new architecture could make future hardware more efficient and accelerate its adoption in areas such as foundational AI models, embodied intelligence, autonomous driving, brain-computer interfaces, and communication systems.