In-memory computing architecture is expected to drive a new class of AI accelerators that could be 10,000 times faster than GPUs


Startups, corporate giants, and academics are re-examining a processor architecture first explored decades ago, betting that it may be an ideal fit for machine learning. They believe the "in-memory computing" (IMC) architecture can drive a new class of artificial intelligence (AI) accelerators that could be 10,000 times faster than today's GPUs.

These processors promise to extend chip performance as CMOS scaling slows, just as deep-learning algorithms that demand dense multiply-accumulate arrays are gaining momentum. Although the chips are still more than a year from commercial availability, they could also become the engines that drive growth in emerging non-volatile memories.
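
To see why deep learning stresses multiply-accumulate hardware, consider that a single fully connected layer is just one matrix-vector product, and essentially all of its arithmetic is MAC operations. A minimal NumPy sketch, with layer sizes chosen arbitrarily for illustration:

```python
import numpy as np

# A fully connected layer computes y = W @ x: every output element is a
# dot product, i.e. a long chain of multiply-accumulate (MAC) operations.
n_in, n_out = 1024, 1024
W = np.random.randn(n_out, n_in).astype(np.float32)  # layer weights
x = np.random.randn(n_in).astype(np.float32)         # input activations

y = W @ x  # n_out * n_in MACs for a single input vector

# On a conventional chip, each MAC first moves a weight from memory to
# the ALU; in-memory computing performs the MACs where W is stored.
print(f"MACs in one forward pass of this layer: {n_out * n_in:,}")
```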

Startup Mythic, for example, aims to run neural-network computations inside a flash array, working in the analog domain to cut power consumption. The company's goal is to ship production chips by the end of 2019, which would make it one of the first to market with this new class of chip.

"Many of us in academia believe emerging memories will be one of the technologies that enable processing-in-memory (PIM)," said Suman Datta, chair of the electrical engineering department at the University of Notre Dame. "Non-volatile memory will create new usage models, and in-memory computing architectures will be one of the keys."

Datta pointed out that several academics tried to build such processors in the 1990s. Designs such as EXECUBE, IRAM, and FlexRAM all "fizzled out. Today, emerging memories such as phase-change memory (PCM), resistive RAM (RRAM), and STT-MRAM, together with the industry's strong interest in machine-learning hardware accelerators, have begun to revive the field. However, as far as I know, most demonstrations are still at the device or device-array level rather than complete accelerators."

One contender is IBM's "Resistive Processing Unit" (RPU), first disclosed in 2016: a 4,096 x 4,096 crossbar array of analog elements.
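
The appeal of such a crossbar is that circuit physics performs the matrix-vector product: store the weights as conductances, apply the inputs as voltages, and each column current is a complete dot product. Below is an idealized sketch of that behavior; the value ranges are invented for illustration, and a real RPU adds noise, drift, and limited precision:

```python
import numpy as np

rows, cols = 4096, 4096  # the array size IBM disclosed for the RPU

# Weights stored as cell conductances (siemens), inputs applied as
# row voltages. Values here are arbitrary illustrative ranges.
G = np.random.uniform(1e-6, 1e-4, size=(rows, cols))
V = np.random.uniform(0.0, 0.2, size=rows)

# Ohm's law at each cell (I = G * V) plus Kirchhoff's current law on
# each column yields a full matrix-vector product in one analog step.
I = G.T @ V  # one current reading per column: 4,096 dot products at once

print(f"{rows * cols:,} multiply-accumulates per single array read")
```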

"The challenge is figuring out the right analog memory element; we are evaluating phase-change, RRAM, and ferroelectric materials," said IBM researcher Vijay Narayanan, a materials scientist whose main research area is high-k metal gates.

Stanford University also published research in this field in 2015, and researchers in China and South Korea are pursuing the concept as well.

To succeed, researchers need to find memory elements made of materials compatible with CMOS fabs. In addition, Narayanan said, "the real challenge" is that the element must exhibit a symmetric change in conductance or resistance when voltage is applied.
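
Symmetry matters because training nudges each weight up and down with voltage pulses; if an "up" pulse changes conductance more than a "down" pulse, updates that should cancel instead accumulate, and the stored weight drifts. A toy simulation of that failure mode (the step sizes are made-up numbers, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
pulses = rng.choice([+1, -1], size=10_000)  # balanced up/down weight updates

def final_conductance(up_step, down_step):
    g = 0.5  # normalized conductance, starting mid-range
    for p in pulses:
        g += up_step if p > 0 else -down_step
        g = min(max(g, 0.0), 1.0)  # real devices saturate at their limits
    return g

# Symmetric device: equal and opposite steps cancel out on average.
print("symmetric: ", final_conductance(0.001, 0.001))    # stays near 0.5
# Asymmetric device: 'up' outweighs 'down', so the stored weight drifts
# toward saturation even though the updates should have cancelled.
print("asymmetric:", final_conductance(0.0015, 0.0005))  # pinned near 1.0
```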

[Figure] Vijay Narayanan, materials researcher at IBM Research, says most in-memory processors for AI are still in the research phase, roughly three to five years from market availability. (Source: IBM)

Some thoughts about future transistors

So far IBM has produced discrete devices and arrays, but not a complete test chip with the 4K x 4K array, nor one built from the materials it currently considers ideal. Narayanan said that IBM's Geoff Burr used phase-change materials to train a deep neural network (DNN) on a 500 x 661 array, with results showing "reasonable accuracy and acceleration."

"We are making steady progress, but we must also understand that we must improve existing materials and we must also evaluate new materials."

IBM favors analog elements because they can define multiple conductance states, which opens the door to lower-power operation than digital components allow. The company also likes the fact that large arrays offer an opportunity to perform many AI operations in parallel.
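
One way to picture those multiple conductance states is as on-chip weight quantization: each analog cell holds one of N distinguishable levels instead of a single bit. The sketch below maps floating-point weights onto a small set of levels; the 16-level figure is an assumption for illustration, not an IBM specification:

```python
import numpy as np

levels = 16  # hypothetical count of distinguishable conductance states
w = np.random.randn(8).astype(np.float32)  # trained floating-point weights

# Map each weight linearly onto the nearest of the available states.
lo, hi = float(w.min()), float(w.max())
step = (hi - lo) / (levels - 1)
codes = np.round((w - lo) / step).astype(int)  # state index per cell
w_stored = lo + codes * step                   # value each cell represents

print("float weights: ", np.round(w, 3))
print("16-level cells:", np.round(w_stored, 3))
print("max error:", float(np.abs(w - w_stored).max()))  # at most step / 2
```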

Narayanan is optimistic that IBM can draw on its years of experience with high-k metal gates to find materials whose resistance can be tuned for AI accelerators. He spent more than a decade moving IBM's expertise in that area from research into commercial products, in collaboration with industry partners such as Globalfoundries and Samsung.

Looking ahead, IBM will focus on developing gate-all-around (GAA) transistors for use beyond the 7nm node. Narayanan believes there are no fundamental obstacles to this type of design, only implementation problems.

Beyond nanosheets, researchers are exploring negative-capacitance field-effect transistors (FETs), which can deliver a large change in current from a small change in voltage. The idea has drawn growing attention over the past five years, since researchers discovered that doped hafnium oxide is ferroelectric and may be CMOS-compatible.
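
The "large current change from a small voltage change" claim is usually expressed through the textbook subthreshold-swing relation S = ln(10) * (kT/q) * (1 + Cs/Cins): with ordinary dielectrics the capacitance ratio is positive and S stays above roughly 60 mV per decade at room temperature, while a ferroelectric acting as a negative capacitance makes the ratio negative and pushes S below that limit. A back-of-the-envelope check, with capacitance values invented for illustration:

```python
import math

kT_over_q = 0.02585  # thermal voltage at 300 K, in volts

def swing_mV_per_decade(C_semi, C_ins):
    """Subthreshold swing S = ln(10) * (kT/q) * (1 + C_semi / C_ins)."""
    return 1000 * math.log(10) * kT_over_q * (1 + C_semi / C_ins)

# Ordinary dielectric: both capacitances positive, so S > ~60 mV/decade.
print(swing_mV_per_decade(1.0, 3.0))   # ~79 mV per decade of current
# Ferroelectric layer acting as a negative capacitance: the ratio goes
# negative, the body factor drops below 1, and S falls under 60 mV/dec.
print(swing_mV_per_decade(1.0, -3.0))  # ~40 mV per decade of current
```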

But Narayanan also said, "There are still many detractors, as well as supporters, on both sides."

"Our research shows that negative capacitance is a temporary effect," said Datta of Notre Dame. "So, when the polarization switch is switched, the channel charge can be activated temporarily, and once the transient is stable, it will not achieve any results." ."

Researchers at the University of California, Berkeley (UC Berkeley), by contrast, believe this is an important "new state." So the story continues to evolve, and it is fair to say that most companies are conducting their own internal assessments.
