Memory Systems in Modern ASICs and SoCs
Arithmetic and logical operations in a computing system are executed by the Central Processing Unit (CPU). Modern Systems-on-Chip (SoCs) integrate CPU cores based on architectures such as Arm, x86, or MIPS. These cores process digital data stored in memory and produce results that are ultimately written back to another memory location. To understand the types of memory a computing system requires, consider the following analogy:
The Analogy of Memory as a Scratchpad
Solving a complex math problem typically requires writing intermediate steps on paper, even if only the final answer matters. Similarly, a processor needs volatile memory as a temporary workspace to execute operations and derive results. Once finalized, data is stored in non-volatile memory (e.g., SSDs or hard drives) for long-term retention.
Hierarchy of Volatile Memory
1. Registers (Flip-Flops): The fastest and smallest storage, built from flip-flops inside the CPU core. Registers hold the operands and intermediate results of the instructions currently executing.
2. On-Chip Cache (SRAM): Larger but slower than registers, SRAM caches are organized in levels (L1, L2, L3) that keep recently and frequently used data close to the core.
3. Off-Chip DRAM (DDR Modules): The largest and slowest volatile tier. High-density DRAM on DDR modules is accessed through an on-chip DDR controller and holds the working set that does not fit on chip.
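The three tiers above can be summarized in a short sketch. The capacity and latency figures below are rough, assumed ballpark values for a modern SoC, not specifications of any particular part:

```python
# Illustrative sketch of the volatile-memory hierarchy described above.
# All capacity and latency numbers are assumed, order-of-magnitude values.

MEMORY_HIERARCHY = [
    # (level, typical capacity, approx. access latency in CPU cycles)
    ("Registers (flip-flops)", "~1 KB",        1),
    ("L1 cache (SRAM)",        "32-64 KB",     4),
    ("L2 cache (SRAM)",        "256 KB-1 MB", 12),
    ("L3 cache (SRAM)",        "2-32 MB",     40),
    ("Off-chip DRAM (DDR)",    "4-64 GB",    200),
]

def describe_hierarchy(levels):
    """Return one formatted line per level, fastest/smallest first."""
    return [f"{name:<24} {cap:>12}  ~{cycles} cycles"
            for name, cap, cycles in levels]

for line in describe_hierarchy(MEMORY_HIERARCHY):
    print(line)
```

Note how each step down the hierarchy trades speed for capacity: latency grows by roughly an order of magnitude while capacity grows by several.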
Cache Architecture and Memory Access
When the CPU needs a value, operands already held in registers are used directly; otherwise the request checks the L1 cache, then the L2/L3 caches. If the data is absent from every cache level, the DDR controller fetches it from off-chip DRAM. The memory hierarchy ensures that frequently used data resides in faster, closer storage (registers → cache → DRAM).
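The lookup order described above can be sketched in a few lines. The level names, addresses, and the naive "fill every level on a miss" policy are illustrative assumptions, not a model of any real cache:

```python
# Minimal sketch of the lookup order described above: the CPU checks each
# cache level in turn and falls back to DRAM when every level misses.

def load(address, levels, dram):
    """Walk the hierarchy; return (value, name_of_level_that_hit)."""
    for name, store in levels:
        if address in store:
            return store[address], name        # cache hit
    # Miss in every on-chip level: the DDR controller fetches from DRAM,
    # and the data is filled into the caches on the way back.
    value = dram[address]
    for name, store in levels:                 # simplistic fill policy
        store[address] = value
    return value, "DRAM"

l1, l2 = {0x10: "a"}, {0x20: "b"}
dram = {0x10: "a", 0x20: "b", 0x30: "c"}
levels = [("L1", l1), ("L2", l2)]

print(load(0x10, levels, dram))  # ('a', 'L1')   -- hit in L1
print(load(0x30, levels, dram))  # ('c', 'DRAM') -- miss everywhere, fetched
print(load(0x30, levels, dram))  # ('c', 'L1')   -- now cached on chip
```

The third access hitting in L1 is the whole point of the hierarchy: once data has been fetched from DRAM, subsequent accesses are served at cache speed.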
System-Level Optimization
Optimal memory performance in SoCs hinges on several factors: sizing each cache level appropriately for the target workloads, scheduling and reordering requests efficiently in the DDR controller, and writing software with good data locality so the hierarchy is actually exploited.
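As a toy illustration of why data locality matters, the sketch below models a cache holding a single line of an assumed 8 elements and counts line fetches for sequential (row-major) versus strided (column-major) traversal of the same array; the line size and array shape are arbitrary illustration values:

```python
# Toy model: a "cache" that holds one 8-element line at a time.
# Sequential traversal reuses each fetched line; strided traversal
# touches a different line on every access and misses constantly.

LINE = 8  # elements per cache line (assumed)

def count_line_fetches(addresses):
    """Count cache-line fetches caused by a stream of element indices."""
    fetches, cached_line = 0, None
    for addr in addresses:
        line = addr // LINE
        if line != cached_line:
            fetches += 1
            cached_line = line
    return fetches

rows, cols = 16, 16
row_major = [r * cols + c for r in range(rows) for c in range(cols)]
col_major = [r * cols + c for c in range(cols) for r in range(rows)]

print(count_line_fetches(row_major))  # 32  (256 elements / 8 per line)
print(count_line_fetches(col_major))  # 256 (a new line on every access)
```

The same 256 elements are read in both cases; only the order changes, yet the strided pattern causes 8x the fetches. Real caches hold many lines, but the same principle drives the gap between cache-friendly and cache-hostile code.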
In summary, modern computing systems rely on a synergistic combination of registers, on-chip SRAM caches, sophisticated DDR controllers, and high-density DRAM modules to deliver the speed and capacity required for complex applications.
AI workloads now dominate ASIC development, and they strengthen the case for true in-memory computing. What are your expectations for emerging memories such as MRAM and ReRAM, and where do their limits lie?