Samsung has announced the first demonstration of in-memory computing based on MRAM (Magnetoresistive Random Access Memory).
In the traditional computing architecture, data is stored in memory chips (DRAM) and processed by processors (CPUs). In-memory computing is a new paradigm that aims to perform both the storage and the processing of data within the memory itself.
Because large amounts of data can be processed within the memory network itself, in a highly parallel fashion and with little data movement between memory and processor, power consumption is substantially reduced.
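The principle behind this parallelism can be sketched in a few lines. In resistive in-memory computing, a crossbar of programmable conductances performs a matrix-vector multiply in a single analog step: Ohm's law multiplies each input voltage by a cell conductance, and Kirchhoff's current law sums the results along each column. The values below are illustrative assumptions, not data from the Samsung demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # cell conductances (illustrative)
v = rng.uniform(0.0, 0.5, size=4)       # input voltages applied to the rows

# Analog behavior: each cell passes a current G[i, j] * v[i] (Ohm's law),
# and the column wire sums those currents (Kirchhoff's current law).
# One parallel read-out thus computes the whole matrix-vector product.
column_currents = G.T @ v

# Digital reference computed element by element, as a CPU would.
reference = np.array(
    [sum(G[i, j] * v[i] for i in range(4)) for j in range(3)]
)

assert np.allclose(column_currents, reference)
print(column_currents)
```

The key point is that the number of sequential steps does not grow with the size of the array: every multiply and every add happens simultaneously in the physics of the crossbar.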
In-memory computing has therefore emerged as one of the promising technologies for the next generation of low-power AI semiconductor chips.
In-memory computing resembles the brain in that computation also occurs within a network of biological memories, the synapses, which are the points where neurons connect to one another.
In fact, although at the moment the computation performed by our MRAM network serves a different purpose than the computation performed by the brain, this solid-state memory network may in the future be used as a platform to mimic the brain by modeling the synaptic connectivity of the brain.
Non-volatile memories, particularly Resistive Random Access Memory (RRAM) and Phase-change Random Access Memory (PRAM), have been actively used to demonstrate this type of computing.
However, it has so far been difficult to use MRAM, another type of non-volatile memory, for this purpose despite its advantages in operating speed, endurance, and large-scale production. The difficulty stems from MRAM's low resistance: in the standard in-memory computing architecture, low-resistance cells draw large currents, so MRAM cannot enjoy the benefit of reduced power consumption.
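The resistance issue can be made concrete with a back-of-the-envelope calculation. During a read, the static power dissipated by a cell is roughly P = V² / R, so lower resistance means proportionally higher power for the same read voltage. All numbers below are illustrative assumptions, not measured device data.

```python
# Why low cell resistance undermines the power benefit of a
# current-summing in-memory architecture: static read power per cell
# is roughly P = V^2 / R, summed over all cells in a column.

V_READ = 0.2  # read voltage in volts (illustrative)

def column_power(resistances_ohm, v=V_READ):
    """Total static power of one crossbar column during a read."""
    return sum(v * v / r for r in resistances_ohm)

high_r_cells = [1e6] * 256  # high-resistance cells (RRAM-like, assumed)
low_r_cells = [1e4] * 256   # low-resistance cells (MRAM-like, assumed)

p_high = column_power(high_r_cells)
p_low = column_power(low_r_cells)

# With these assumed values the low-resistance column draws 100x the
# power of the high-resistance one for the same read voltage.
print(f"high-R: {p_high * 1e6:.1f} uW, low-R: {p_low * 1e6:.1f} uW")
```

This is the scaling problem the Samsung researchers had to work around in order to make an MRAM array viable for in-memory computing.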
Samsung researchers have provided a solution to this problem by developing an MRAM array chip that demonstrates in-memory computing.
The chip was tested on AI applications and achieved 98% accuracy in classifying handwritten digits and 93% accuracy in detecting faces in images. Samsung claims that MRAM technology can be used for AI processing and to build highly energy-efficient AI chips.