Memory-centric computing
Big data applications such as machine learning and artificial intelligence require access to large data sets to train algorithms or neural networks. This creates the need to move all the data an application requires to the processor. Moving large amounts of data across a network costs time and money and ties up network bandwidth. This is neither an optimal nor a sustainable approach.
The root cause lies in the von Neumann computing architecture, which has dominated computer design for more than 70 years. In this model, the rest of the computing system revolves around the processor (hence it is often called a processor-centric, or "general computing," architecture). One proposed solution is to make memory the focal point of the system instead: a memory-centric computer architecture. Implementation is complex, however, because current operating systems and applications are built around the von Neumann model. When multiple processors share a large pool of memory, they must maintain cache coherence to prevent one processor from reading or overwriting a memory location that another processor still depends on. The move to highly parallel processing has made some inroads into multi-core cache access and coherence, but those designs have used relatively little shared memory. Expanding shared memory to multiple terabytes of data adds further complexity to the architectural change. Our research is aimed at understanding what it takes to transition to a memory-centric architecture.
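To make the coherence problem concrete, the following is a minimal sketch of a write-invalidate scheme between two cores with private caches. The class names and the simplified protocol are illustrative assumptions for this article, not any real hardware design (real processors use protocols such as MESI implemented in hardware):

```python
# Minimal sketch of write-invalidate cache coherence between two cores.
# All names (SharedMemory, Cache) are illustrative, not a real protocol.

class SharedMemory:
    def __init__(self):
        self.data = {}
        self.caches = []           # caches that must be kept coherent

    def read(self, addr):
        return self.data.get(addr, 0)

    def write(self, addr, value):
        self.data[addr] = value

class Cache:
    def __init__(self, memory):
        self.lines = {}            # addr -> cached value
        self.memory = memory
        memory.caches.append(self)

    def read(self, addr):
        if addr not in self.lines:             # cache miss: fetch from memory
            self.lines[addr] = self.memory.read(addr)
        return self.lines[addr]

    def write(self, addr, value):
        # Write-invalidate: every other cache drops its now-stale copy.
        for cache in self.memory.caches:
            if cache is not self:
                cache.lines.pop(addr, None)
        self.lines[addr] = value
        self.memory.write(addr, value)         # write-through for simplicity

mem = SharedMemory()
core_a, core_b = Cache(mem), Cache(mem)

core_b.read(0x10)            # core B caches the old value (0)
core_a.write(0x10, 42)       # core A's write invalidates B's cached copy
print(core_b.read(0x10))     # -> 42; without invalidation B would still see 0
```

The cost of this bookkeeping grows with the number of caches and the size of the shared memory, which is why scaling coherence to terabytes of shared memory is one of the hard problems the paragraph above describes.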
Non-volatile memory
Currently, two memory technologies are used as memory accessed directly by the CPU: SRAM and DRAM. Both are volatile, meaning the stored data is lost when power is removed. A new class of memory, persistent or storage-class memory (SCM), is being developed that is non-volatile: data does not disappear when power is lost.
Memory located close to the CPU must have fast access times, with latency close to the CPU clock speed, to serve as CPU cache. SSD and HDD storage devices have latencies far too high for this role. Differentiating memory from storage is key to this discussion: here, memory refers to byte-addressable memory used for registers and the caches accessed by the CPU, while storage refers to devices that organize and store data in blocks and are not byte addressable. Some low-latency non-volatile memory designs can compete with DRAM: MRAM (Magnetoresistive Random Access Memory) and its variant STT-MRAM (Spin Transfer Torque MRAM). Other technologies, such as PCM (Phase Change Memory) and ReRAM (Resistive Random Access Memory), fall between DRAM and SSDs in latency. Western Digital has researched non-volatile memory for several years, from basic materials research (including cell physics and design) to the fabrication and testing of memory cell arrays. All of these efforts will come together when a product is launched in this emerging market. These three areas are the "hardware" side of the technology; other research focuses on the "software" implications of the transition to non-volatile memory.
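The difference between byte-addressable memory and block-oriented storage can be sketched in a few lines. In this sketch an ordinary memory-mapped file stands in for persistent memory, which is a simplification: real SCM is exposed to software through mechanisms such as direct-access (DAX) mappings, and the file path here is arbitrary:

```python
import mmap
import os
import tempfile

# A temporary file stands in for a persistent-memory device (an assumption
# for illustration; real SCM would be mapped directly, e.g. via DAX).
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # one 4 KiB "block"

# Storage-style access: read-modify-write a whole block to change one byte.
with open(path, "r+b") as f:
    block = bytearray(f.read(4096))    # read the entire block
    block[100] = 0xAB                  # modify a single byte
    f.seek(0)
    f.write(block)                     # write the entire block back

# Memory-style access: the mapping is byte addressable, so a single
# byte can be updated in place, with no block transfer required.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mm:
        mm[200] = 0xCD                 # one-byte store
        print(mm[100], mm[200])        # -> 171 205
```

The first access pattern is what block storage imposes on software today; the second is what byte-addressable non-volatile memory would allow, which is why its arrival has "software" implications as well as hardware ones.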