On-line Mass Storage - Secondary Storage


In computer architecture, the memory hierarchy separates computer storage into a hierarchy based primarily on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. The memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference. Designing for high performance requires considering the restrictions of the memory hierarchy, i.e. the size and capabilities of each component. Each component can be viewed as part of a hierarchy of memories (m1, m2, ..., mn) in which each member mi is typically smaller and faster than the next-highest member mi+1 of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling to activate the transfer. There are four major storage levels:

- Internal: processor registers and cache.
- Main: the system RAM and controller cards.
- On-line mass storage: secondary storage.
- Off-line bulk storage: tertiary and off-line storage.

This is a general memory hierarchy structuring. Many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture, and one can include a level of nearline storage between online and offline storage.
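The boundaries between these levels can be observed directly. Below is a minimal sketch in C (an illustration, not part of the article): it reads one byte per 64-byte cache line across buffers of growing size, so the time per access jumps each time the working set overflows a level (L1, L2, L3, then RAM). The buffer sizes, pass count, and 64-byte line size are assumptions, and POSIX clock_gettime is assumed to be available.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t max = 64u * 1024 * 1024;   /* 64 MiB, larger than most L3 caches */
    char *buf = calloc(max, 1);             /* zeroed so every read is well defined */
    if (!buf) return 1;

    for (size_t size = 4u * 1024; size <= max; size *= 4) {
        volatile unsigned char sink = 0;    /* volatile keeps the reads from being optimized away */
        size_t touches = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* Touch one byte per assumed 64-byte cache line, 16 passes over the buffer. */
        for (int pass = 0; pass < 16; pass++)
            for (size_t i = 0; i < size; i += 64) {
                sink = (unsigned char)(sink + buf[i]);
                touches++;
            }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%8zu KiB: %6.2f ns per line\n", size / 1024, ns / (double)touches);
        (void)sink;
    }
    free(buf);
    return 0;
}
```

On a typical desktop the printed time per line stays roughly flat while the buffer fits in a given cache level and steps upward as each level overflows.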


Adding complexity slows the memory hierarchy. One of the main ways to increase system performance is minimizing how far down the memory hierarchy one has to go to manipulate data. Latency and bandwidth are two metrics associated with caches. Neither is uniform; each is specific to a particular component of the memory hierarchy. Predicting where in the memory hierarchy the data resides is difficult. The location in the memory hierarchy dictates the time required for a prefetch to occur. The number of levels in the memory hierarchy and the performance at each level have increased over time, and the types of memory and storage components have also changed. Processor registers offer the fastest possible access (usually one CPU cycle) and are a few thousand bytes in size. Below them, successive cache levels trade speed for capacity: best access speed is around 700 GB/s for the L1 data cache, around 200 GB/s for L2, around 100 GB/s for L3, and around 40 GB/s for L4. The lower levels of the hierarchy - from mass storage downwards - are also known as tiered storage.
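Since latency depends on which level the data lives in, the access pattern matters as much as the amount of work. The following sketch (an illustration, not from the article; the 4096x4096 size and the use of clock() are arbitrary choices) sums the same matrix twice: row by row, which walks consecutive cache lines, and column by column, which strides far apart and keeps falling to lower levels of the hierarchy. Compiled with optimization (e.g. cc -O2), the column-major walk is typically several times slower.

```c
#include <stdio.h>
#include <time.h>

#define N 4096                       /* 4096 x 4096 doubles = 128 MiB, well past L3 */

static double a[N][N];               /* zero-initialized static storage */

static double walk(int by_rows) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += by_rows ? a[i][j]     /* consecutive: one cache line feeds 8 reads */
                           : a[j][i];    /* strided: a new cache line on every read   */
    return sum;
}

int main(void) {
    clock_t t0 = clock();
    double r = walk(1);
    clock_t t1 = clock();
    double c = walk(0);
    clock_t t2 = clock();
    printf("row-major:    %.3f s (sum %.1f)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, r);
    printf("column-major: %.3f s (sum %.1f)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, c);
    return 0;
}
```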


On-line storage is immediately available for I/O. Nearline storage is not immediately available, but can be made online quickly without human intervention. Off-line storage is not immediately available, and requires some human intervention to bring online. For example, always-on spinning disks are online storage, while spinning disks that spin down, such as massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in a tape library, are nearline storage, while cartridges that must be manually loaded are off-line storage. Most modern CPUs are so fast that, for most program workloads, the bottleneck is the locality of reference of memory accesses and the efficiency of caching and memory transfer between the levels of the hierarchy; as a result, the CPU spends much of its time idling, waiting for memory I/O to complete. This is sometimes called the space cost, as a larger memory object is more likely to overflow a small, fast level and require use of a larger, slower level. The resulting load on memory use is known as pressure (respectively register pressure, cache pressure, and (main) memory pressure).


Terms for data being missing from a higher level and needing to be fetched from a lower level are, respectively: register spilling (due to register pressure: register to cache), cache miss (cache to main memory), and (hard) page fault (real main memory to virtual memory, i.e. mass storage, commonly referred to as disk regardless of the actual mass storage technology used). Modern programming languages mainly assume two levels of memory, main (working) memory and mass storage, though in assembly language, and in inline assemblers in languages such as C, registers can be accessed directly. Programmers are responsible for moving data between disk and memory through file I/O. Hardware is responsible for moving data between memory and caches. Optimizing compilers are responsible for generating code that, when executed, will cause the hardware to use caches and registers efficiently. Many programmers assume one level of memory. This works fine until the application hits a performance wall; then the memory hierarchy can be assessed during code refactoring.
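One common refactoring at that point is cache blocking, sketched below (an illustration only; N and the block size B are arbitrary assumptions, with B chosen so a B x B tile of doubles fits comfortably in L1). The naive transpose stores with a stride of N doubles and misses on nearly every write; the tiled version confines the work to one small tile at a time, so the same data movement stays higher in the hierarchy.

```c
#include <stdio.h>
#include <time.h>

#define N 2048                       /* 2048 x 2048 doubles = 32 MiB per array */
#define B 32                         /* a 32 x 32 tile of doubles is 8 KiB     */

static double src[N][N], dst[N][N];

/* Naive transpose: dst is written with a stride of N doubles,
 * so nearly every store touches a different cache line. */
static void transpose_naive(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            dst[j][i] = src[i][j];
}

/* Tiled transpose: both arrays stay within one B x B block at a time,
 * so each cache line is reused before it is evicted. */
static void transpose_tiled(void) {
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int i = ii; i < ii + B; i++)
                for (int j = jj; j < jj + B; j++)
                    dst[j][i] = src[i][j];
}

int main(void) {
    clock_t t0 = clock();
    transpose_naive();
    clock_t t1 = clock();
    transpose_tiled();
    clock_t t2 = clock();
    printf("naive: %.3f s, tiled: %.3f s\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```

The behavior of both functions is identical; only the visit order changes, which is why an optimizing compiler is free to perform similar transformations itself.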
