Computational RAM (C-RAM) is random-access memory with processing elements integrated on the same chip. This enables C-RAM to be used as a SIMD computer. It also can be used to more efficiently use memory bandwidth within a memory chip. The general technique of doing computations in memory is called Processing-In-Memory (PIM).
Reconfigurable Architecture DRAM (RADram) is DRAM with reconfigurable computing (FPGA) logic elements integrated on the same chip.[2]
SimpleScalar simulations show that RADram (in a system with a conventional processor) can give orders of magnitude better performance on some problems than traditional DRAM (in a system with the same processor).
Some embarrassingly parallel computational problems are already limited by the von Neumann bottleneck between the CPU and the DRAM.
Some researchers expect that, for the same total cost, a machine built from computational RAM will run orders of magnitude faster than a traditional general-purpose computer on these kinds of problems.[3]
As of 2011, the "DRAM process" (few layers, optimized for high capacitance) and the "CPU process" (optimized for high frequency, typically with twice as many BEOL layers as DRAM; since each additional layer reduces yield and increases manufacturing cost, such chips are relatively expensive per square millimeter compared to DRAM) are distinct enough that there are three approaches to computational RAM:
starting with a CPU-optimized process and a design that uses a large amount of embedded SRAM, add an additional process step (making the chip even more expensive per square millimeter) that allows replacing the embedded SRAM with embedded DRAM (eDRAM), giving roughly a 3× area saving in the SRAM regions (and so lowering net cost per chip).
starting with a system that has a separate CPU chip and DRAM chip(s), add small amounts of "coprocessor" computational ability to the DRAM, working within the limits of the DRAM process and adding only a small amount of area, to do things that would otherwise be slowed by the narrow bottleneck between CPU and DRAM: zero-fill selected areas of memory, copy large blocks of data from one location to another, find where (if anywhere) a given byte occurs in some block of data, etc. The resulting system—the unchanged CPU chip plus "smart DRAM" chip(s)—is at least as fast as the original system, and potentially slightly lower in cost. The small amount of extra area is expected to more than pay for itself in savings in expensive test time, since a "smart DRAM" has enough computational capability for a wafer full of DRAM to do most testing internally in parallel, rather than the traditional approach of fully testing one DRAM chip at a time with expensive external automatic test equipment.[1]
starting with a DRAM-optimized process, tweak the process to make it slightly more like the "CPU process", and build a (relatively low-frequency, but low-power and very high bandwidth) general-purpose CPU within the limits of that process.
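The bulk-memory commands described in the second approach can be sketched as follows. This is an illustrative Python model of a hypothetical "smart DRAM" command set, not any vendor's actual interface; the class and method names are invented for the sketch.

```python
# Hypothetical "smart DRAM" command set, modeled in Python.
# Only commands and small results cross the narrow CPU-DRAM bus;
# the data blocks themselves never leave the memory chip.

class SmartDRAM:
    def __init__(self, size):
        self.mem = bytearray(size)  # the DRAM array

    def zero_fill(self, start, length):
        # Clear a region without streaming it through the CPU.
        self.mem[start:start + length] = bytes(length)

    def copy_block(self, src, dst, length):
        # Move a block of data entirely inside the memory chip.
        self.mem[dst:dst + length] = self.mem[src:src + length]

    def find_byte(self, value, start, length):
        # Return the first offset of `value` in the region, or -1.
        return self.mem.find(bytes([value]), start, start + length)

ram = SmartDRAM(1024)
ram.mem[100:104] = b"ABCD"
ram.copy_block(100, 200, 4)
print(ram.find_byte(ord("C"), 200, 4))  # → 202
```

In a real system each method call would be a single command sent over the memory bus, replacing a long sequence of individual reads and writes.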
Some CPUs designed to be built on a DRAM process technology (rather than a "CPU" or "logic" process technology specifically optimized for CPUs) include the Berkeley IRAM Project, TOMI Technology,[4][5] and the AT&T DSP1.
Because a memory bus to off-chip memory has many times the capacitance of an on-chip memory bus, a system with separate DRAM and CPU chips can consume several times as much energy as an IRAM system with the same performance.[1]
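The capacitance argument can be made concrete with a back-of-the-envelope calculation using the CMOS switching-energy formula E = ½CV². The capacitance and voltage values below are order-of-magnitude illustrative assumptions, not measured figures for any particular system.

```python
# Rough comparison of switching energy per bus line, E = (1/2) * C * V^2.
# Capacitance values are illustrative order-of-magnitude assumptions:
# off-chip lines carry pins, package, and board-trace capacitance,
# while on-chip wiring is far lighter.

def switching_energy(capacitance_farads, voltage):
    return 0.5 * capacitance_farads * voltage ** 2

V = 1.2  # assumed bus voltage in volts

off_chip = switching_energy(20e-12, V)   # ~20 pF assumed per off-chip line
on_chip = switching_energy(0.5e-12, V)   # ~0.5 pF assumed per on-chip line

print(f"off-chip: {off_chip:.2e} J per transition")
print(f"on-chip:  {on_chip:.2e} J per transition")
print(f"ratio:    {off_chip / on_chip:.0f}x")  # → 40x
```

Under these assumptions each off-chip transition costs tens of times more energy than an on-chip one, which is why keeping traffic on-die saves power even before any performance benefit.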
Because computational DRAM is expected to run hotter than traditional DRAM, and increased chip temperature causes faster charge leakage from the DRAM storage cells, computational DRAM is expected to require more frequent DRAM refresh.[2]
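The temperature dependence can be sketched with a common rule of thumb: cell leakage roughly doubles for every ~10 °C rise, so the required refresh interval roughly halves. The baseline interval, baseline temperature, and doubling constant below are illustrative assumptions, not values from any DRAM datasheet.

```python
# Rule-of-thumb sketch: DRAM cell leakage roughly doubles per ~10 degC,
# so the required refresh interval roughly halves. All constants here
# are illustrative assumptions, not datasheet values.

def refresh_interval_ms(temp_c, base_interval_ms=64.0,
                        base_temp_c=45.0, doubling_degc=10.0):
    # Halve the interval for each `doubling_degc` above the baseline.
    return base_interval_ms / (2 ** ((temp_c - base_temp_c) / doubling_degc))

print(refresh_interval_ms(45))  # → 64.0 (baseline)
print(refresh_interval_ms(65))  # → 16.0 (20 degC hotter: 4x more refreshes)
```

Each extra refresh cycle costs both bandwidth and energy, which is the overhead the sentence above refers to.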
The chief goal of merging the processing and memory components in this way is to reduce memory latency and increase bandwidth. Additionally, reducing the distance that data must be moved reduces the power requirements of a system.[6] Much of the complexity (and hence power consumption) of current processors stems from strategies for avoiding memory stalls.
DRAM-based near-memory and in-memory designs can be categorized into four groups:
DIMM-level approaches place the processing units near the memory chips. These approaches require minimal or no change to the data layout (e.g., Chameleon[9] and RecNMP[10]).
Logic-layer-level approaches embed processing units in the logic layer of 3D-stacked memories and can benefit from the high bandwidth of those memories (e.g., TOP_PIM[11]).
Bank-level approaches place processing units inside the memory layers, near each bank. UPMEM and Samsung's PIM[12] are examples of these approaches.
Subarray-level approaches process data inside each subarray. They provide the highest access parallelism but often perform only simple operations, such as bitwise operations on an entire memory row (e.g., DRISA[13]) or sequential processing of the memory row using a single-word ALU (e.g., Fulcrum[14]).
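The row-wide bitwise style of subarray-level processing can be sketched as follows. This is a conceptual Python model (a row represented as a Python integer bit-vector), not DRISA's actual circuitry; the row width is an illustrative assumption, since real DRAM rows are thousands of bits wide.

```python
# Conceptual sketch of DRISA-style subarray processing: a bitwise
# operation is applied to an entire memory row at once, rather than
# word by word on a CPU. Rows are modeled as Python integer bit-vectors.

ROW_BITS = 64  # illustrative; real DRAM rows span thousands of bits

def row_and(row_a, row_b):
    # One row-wide AND, conceptually computed inside the subarray.
    return row_a & row_b

def row_or(row_a, row_b):
    # One row-wide OR, likewise a single in-subarray operation.
    return row_a | row_b

a = 0b1100_1010
b = 0b1010_0110
print(bin(row_and(a, b)))  # → 0b10000010
print(bin(row_or(a, b)))   # → 0b11101110
```

The parallelism comes from every bit position in the row being processed simultaneously; more complex operations (addition, comparison) must be composed from sequences of such bitwise steps.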
^ a b c Christoforos E. Kozyrakis, Stylianos Perissakis, David Patterson, Thomas Anderson, et al. "Scalable Processors in the Billion-Transistor Era: IRAM". IEEE Computer. 1997. Says: "Vector IRAM ... can operate as a parallel built-in self-test engine for the memory array, significantly reducing the DRAM testing time and the associated cost."
^ Yong-Bin Kim and Tom W. Chen. "Assessing Merged DRAM/Logic Technology" (PDF). 1998. Archived from the original on 2011-07-25. Retrieved 2011-11-27.