#MemoryHierarchy
Ever wondered how your computer knows where to find your data in a flash?
In our latest deep dive, we explore the Cache & Main Memory Hierarchy, from the blazing-fast CPU cache to RAM and finally down to SSD/NVMe storage.
- Discover how L1/L2/L3 cache works
- Understand RAM vs. storage bottlenecks
- Learn how data flows from the chip to the disk
- See why memory hierarchy = faster performance
Perfect for tech lovers, PC builders, and curious minds! Read now to level up your knowledge of how your system really works.
#TechExplained #MemoryHierarchy #ComputerArchitecture #RAM #Cache #SSD #StorageTech #TumblrTech #FastComputing #PCBuilds #TechEducation #GeekStuff
Today I learned...
How to make faster memories
While processor speeds have increased exponentially in the last 30 years, reaching billions of clock cycles per second, the same can't be said for memories. In fact, the access time of volatile DRAM memories is one or two orders of magnitude longer than the clock cycle time.
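To make the gap concrete, here's a quick back-of-the-envelope calculation in Python. The 3 GHz clock and 70 ns DRAM access time are illustrative assumptions, not measurements:

```python
# Rough cost of one DRAM access, measured in clock cycles.
# Both numbers below are assumed purely for illustration.

clock_ghz = 3.0                   # assumed 3 GHz processor
cycle_time_ns = 1 / clock_ghz     # ~0.33 ns per clock cycle
dram_latency_ns = 70.0            # assumed DRAM access time

stall_cycles = dram_latency_ns / cycle_time_ns
print(f"One DRAM access costs about {stall_cycles:.0f} cycles")
# -> about 210 cycles: one to two orders of magnitude, as stated above
```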

From gameprogrammingpatterns.com
Clock cycle: the basic time step of a processor. In a simple, non-pipelined design, fetching an instruction from memory, decoding it, and executing it takes one clock cycle.
There are multiple types of memory. A distinction can be made between volatile memories, which lose their data when the computer is turned off, and permanent memories. RAM (Random Access Memory) is volatile, while ROM is permanent.
SRAM (Static RAM) is the fastest type of memory, but also the most expensive one: around $5,000/GB to make.
DRAM implements the main memory of most computers. It's a compromise between price, capacity, and speed.
The computer's hard disk is permanent storage, and that's where your data is saved when you turn the machine off. It only costs between $0.05 and $0.40 per GB, but it's also extremely slow, allowing transfer rates below 1 GB per second.

Here's the thing: having a really fast processor is useless if it has to sit and wait many cycles every time it loads or stores data in memory.
Caching
One part of the solution is introducing a cache: a small, fast memory (SRAM) where we keep the data the processor uses most often, or is most likely to use next.
There are at least three levels of memory: the cache, the main memory, and the hard disk. When the processor requests a piece of data, the cache is the first place to look. If the data is there, a cache hit occurs. Otherwise it's a cache miss, and the search goes down a level, to the main memory. If the data still isn't found, it has to be fetched from the hard disk.
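Here's a minimal Python sketch of that lookup order. The dictionaries standing in for the three levels, and the data in them, are made up purely for illustration:

```python
# Toy model of the three-level lookup: cache -> main memory -> disk.
cache = {}                    # fastest, smallest level
main_memory = {"x": 42}       # slower, larger
disk = {"x": 42, "y": 7}      # slowest, largest

def load(address):
    if address in cache:            # cache hit: done immediately
        return cache[address]
    if address in main_memory:      # cache miss: go down one level
        value = main_memory[address]
        cache[address] = value      # keep a copy close for next time
        return value
    value = disk[address]           # missed in main memory too
    main_memory[address] = value
    cache[address] = value
    return value

print(load("y"))   # first load walks all the way down to the disk
print(load("y"))   # second load is a cache hit
```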

Since cache capacity is much smaller than main memory's, the computer engineer has to decide which subset of the main memory should be kept in the cache, and, when a cache miss occurs, which old data to replace with the new. The two most common replacement strategies are LRU (Least Recently Used) and random replacement. These decisions are based on the principles of temporal and spatial locality, which state that the processor is probably going to reuse data it has just used, or use the data stored next to it.
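As a sketch of the LRU idea, here's a toy cache built on Python's OrderedDict; the capacity and the keys are arbitrary choices:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the entry that was used least recently."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()          # keeps keys ordered by last use

    def get(self, key):
        if key not in self.data:
            return None                    # cache miss
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
        self.data[key] = value

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")           # "a" becomes the most recently used
cache.put("c", 3)        # evicts "b", the least recently used
print(list(cache.data))  # ['a', 'c']
```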
Types of cache

A cache is subdivided into blocks and sets. A block is a contiguous chunk of main memory, usually several words long. Sets are groups of blocks. The cache has a mapping function that associates every main memory address with one set in the cache: a given address always maps to the same set, but may be placed in any block of that set.
A direct mapped cache has as many sets as blocks, so each set holds a single block: a given chunk of memory is always placed in the same cache block. A fully associative cache has only one set, so a piece of data can go in any block, regardless of its memory address. In a set associative cache, each set contains multiple blocks, and the cache contains multiple sets: a compromise between the two other types.
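Here's a small Python sketch of that mapping function for the three organizations. The cache geometry (64 blocks of 64 bytes) is an assumed example, not a real design:

```python
NUM_BLOCKS = 64
BLOCK_SIZE = 64   # bytes: the low bits of an address are the offset in a block

def set_index(address, ways):
    """ways = blocks per set: 1 = direct mapped, NUM_BLOCKS = fully associative."""
    num_sets = NUM_BLOCKS // ways
    block_number = address // BLOCK_SIZE   # drop the offset within the block
    return block_number % num_sets         # same address -> always the same set

addr = 0x1F40
print(set_index(addr, ways=1))            # direct mapped: exactly one candidate block
print(set_index(addr, ways=4))            # 4-way set associative: 4 candidate blocks
print(set_index(addr, ways=NUM_BLOCKS))   # fully associative: always set 0, any block
```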
A fully associative cache has the lowest miss rate, but it's the most expensive to implement, since every block has to be checked in parallel. Larger blocks exploit spatial locality better, but they also increase the penalty of a miss. A direct mapped cache is easy to implement, but it's rigid: two frequently used addresses that map to the same block keep evicting each other, which defeats temporal locality. In the end, the best design has to be evaluated case by case.

Your computer probably has several levels of cache, and more than one cache per CPU core.
I post the things I learn on social media to document them. This is not the original TIL (RIP).
Primary Memory vs Secondary Memory: Understanding the Differences
