7/29/2019 Digital memory terms and concepts.doc
http://slidepdf.com/reader/full/digital-memory-terms-and-conceptsdoc 1/36
Digital memory terms and concepts
When we store information in some kind of circuit or device, we not only need some way to store and retrieve it, but also to locate precisely where in the device it is. Most, if not all, memory devices can be thought of as a series of mail boxes, folders in a file cabinet, or some other metaphor where information can be located in a variety of places. When we refer to the actual information being stored in the memory device, we usually refer to it as the data. The location of this data within the storage device is typically called the address, in a manner reminiscent of the postal service.
With some types of memory devices, the address in which certain data is stored can be called up by means of parallel data lines in a digital circuit (we'll discuss this in more detail later in this lesson). With other types of devices, data is addressed in terms of an actual physical location on the surface of some type of media (the tracks and sectors of circular computer disks, for instance). However, some memory devices such as magnetic tapes have a one-dimensional type of data addressing: if you want to play your favorite song in the middle of a cassette tape album, you have to fast-forward to that spot in the tape, arriving at the proper spot by means of trial-and-error, judging the approximate area by means of a counter that keeps track of tape position, and/or by the amount of time it takes to get there from the beginning of the tape.

The access of data from a storage device falls roughly into two categories: random access and sequential access. Random access means that you can quickly and precisely address a specific data location within the device, and sequential access simply means that you cannot. A vinyl record platter is an example of a random-access device: to skip to any song, you just position the stylus arm at whatever location on the record you want (compact audio disks do the same thing, only they do it automatically for you). Cassette tape, on the other hand, is sequential. You have to wait to go past the other songs in sequence before you can access or address the song that you want to skip to.
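The contrast between the two access categories can be made concrete with a small behavioral sketch. This is purely illustrative (the class and method names are mine, not any real device's interface): the "random-access" medium reaches any address in one move, while the "sequential" medium must wind past every intermediate position.

```python
# Illustrative sketch of the two access models described above.
class RandomAccessMedium:
    """Like a vinyl record: position directly at any location."""
    def __init__(self, data):
        self.data = list(data)

    def read(self, address):
        return self.data[address], 1          # one "move" regardless of address

class SequentialMedium:
    """Like a cassette tape: wind past everything before the target."""
    def __init__(self, data):
        self.data = list(data)
        self.position = 0

    def read(self, address):
        steps = abs(address - self.position)  # traverse every intermediate spot
        self.position = address
        return self.data[address], steps

songs = ["song A", "song B", "song C", "song D"]
record = RandomAccessMedium(songs)
tape = SequentialMedium(songs)
print(record.read(3))   # ('song D', 1) -- constant cost
print(tape.read(3))     # ('song D', 3) -- cost grows with distance
```

The "cost" numbers are arbitrary units, but they capture why skipping to the last song on a tape takes so much longer than dropping a stylus.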
The process of storing a piece of data to a memory device is called writing, and the process of retrieving data is called reading. Memory devices allowing both reading and writing are equipped with a way to distinguish between the two tasks, so that no mistake is made by the user (writing new information to a device when all you wanted to do is see what was stored there). Some devices do not allow for the writing of new data, and are purchased "pre-written" from the manufacturer. Such is the case for vinyl records and compact audio disks, and this is typically referred to in the digital world as read-only memory, or ROM. Cassette audio and video tape, on the other hand, can be re-recorded (re-written) or purchased blank and recorded fresh by the user. This is often called read-write memory.
Another distinction to be made for any particular memory technology is its volatility, or data storage permanence without power. Many electronic memory devices store binary data by means of circuits that are either latched in a "high" or "low" state, and this latching effect holds only as long as electric power is maintained to those circuits. Such memory would be properly referred to as volatile. Storage media such as magnetized disk
or tape is nonvolatile, because no source of power is needed to maintain data storage. This is often confusing for new students of computer technology, because the volatile electronic memory typically used for the construction of computer devices is commonly and distinctly referred to as RAM (Random Access Memory). While RAM memory is typically randomly accessed, so is virtually every other kind of memory device in the computer! What "RAM" really refers to is the volatility of the memory, and not its mode of access. Nonvolatile memory integrated circuits in personal computers are commonly (and properly) referred to as ROM (Read-Only Memory), but their data contents are accessed randomly, just like the volatile memory circuits!
Finally, there needs to be a way to denote how much data can be stored by any particular memory device. This, fortunately for us, is very simple and straightforward: just count up the number of bits (or bytes, 1 byte = 8 bits) of total data storage space. Due to the high capacity of modern data storage devices, metric prefixes are generally affixed to the unit of bytes in order to represent storage space: 1.6 Gigabytes is equal to 1.6 billion bytes, or 12.8 billion bits, of data storage capacity. The only caveat here is to be aware of rounded numbers. Because the storage mechanisms of many random-access memory devices are typically arranged so that the number of "cells" in which bits of data can be stored appears in binary progression (powers of 2), a "one kilobyte" memory device most likely contains 1024 (2 to the power of 10) locations for data bytes rather than exactly 1000. A "64 kbyte" memory device actually holds 65,536 bytes of data (2 to the 16th power), and should probably be called a "66 Kbyte" device to be more precise. When we round numbers in our base-10 system, we fall out of step with the round equivalents in the base-2 system.
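The binary-progression arithmetic above is easy to check in a few lines (a quick sketch; the function name is mine):

```python
# Binary vs. decimal capacity, following the examples in the text.
def binary_capacity_bytes(power_of_two):
    """Actual byte count of a device with 2**n addressable locations."""
    return 2 ** power_of_two

assert binary_capacity_bytes(10) == 1024    # a "one kilobyte" device
assert binary_capacity_bytes(16) == 65536   # a "64 kbyte" device
print(binary_capacity_bytes(16) * 8)        # 524288 bits (1 byte = 8 bits)
```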
Modern nonmechanical memory
Now we can proceed to studying specific types of digital storage devices. To start, I want to explore some of the technologies which do not require any moving parts. These are not necessarily the newest technologies, as one might suspect, although they will most likely replace moving-part technologies in the future.
A very simple type of electronic memory is the bistable multivibrator. Capable of storing a single bit of data, it is volatile (requiring power to maintain its memory) and very fast. The D-latch is probably the simplest implementation of a bistable multivibrator for memory usage, the D input serving as the data "write" input, the Q output serving as the "read" output, and the enable input serving as the read/write control line:
If we desire more than one bit's worth of storage (and we probably do), we'll have to have many latches arranged in some kind of an array where we can selectively address which one (or which set) we're reading from or writing to. Using a pair of tristate buffers, we can connect both the data write input and the data read output to a common data bus line, and enable those buffers to either connect the Q output to the data line (READ), connect the D input to the data line (WRITE), or keep both buffers in the High-Z state to disconnect D and Q from the data line (unaddressed mode). One memory "cell" would look like this, internally:
When the address enable input is 0, both tristate buffers will be placed in high-Z mode, and the latch will be disconnected from the data input/output (bus) line. Only when the address enable input is active (1) will the latch be connected to the data bus. Every latch circuit, of course, will be enabled with a different "address enable" (AE) input line, which will come from a 1-of-n output decoder:
In the above circuit, 16 memory cells are individually addressed with a 4-bit binary code input into the decoder. If a cell is not addressed, it will be disconnected from the 1-bit data bus by its internal tristate buffers: consequently, data cannot be either written or read through the bus to or from that cell. Only the cell circuit that is addressed by the 4-bit decoder input will be accessible through the data bus.
This simple memory circuit is random-access and volatile. Technically, it is known as a static RAM. Its total memory capacity is 16 bits. Since it contains 16 addresses and has a data bus that is 1 bit wide, it would be designated as a 16 x 1 bit static RAM circuit. As you can see, it takes an incredible number of gates (and multiple transistors per gate!) to construct a practical static RAM circuit. This makes the static RAM a relatively low-density device, with less capacity than most other types of RAM technology per unit of IC chip space. Because each cell circuit consumes a certain amount of power, the overall power consumption for a large array of cells can be quite high. Early static RAM banks in personal computers consumed a fair amount of power and generated a lot of heat, too. CMOS IC technology has made it possible to lower the specific power consumption of static RAM circuits, but low storage density is still an issue.
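The 16 x 1 addressing scheme can be sketched behaviorally (a simplification, not a gate-level model; the class name is mine): a 4-bit address selects exactly one of 16 latch cells, and only the selected cell ever drives or reads the shared 1-bit bus.

```python
# Behavioral sketch of the 16 x 1 static RAM described in the text.
class StaticRAM16x1:
    def __init__(self):
        self.cells = [0] * 16           # one latch cell per address

    def write(self, address, bit):
        # Only the cell selected by the 4-bit address latches the bus value;
        # all other cells stay in high-Z (disconnected) mode.
        self.cells[address] = bit

    def read(self, address):
        # Only the addressed cell drives the shared data bus.
        return self.cells[address]

ram = StaticRAM16x1()
ram.write(0b0101, 1)        # address 5, data 1
print(ram.read(5))          # 1
print(ram.read(4))          # 0 -- the unaddressed cell was never driven
```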
To address this, engineers turned to the capacitor instead of the bistable multivibrator as a means of storing binary data. A tiny capacitor could serve as a memory cell, complete with a single MOSFET transistor for connecting it to the data bus for charging (writing a 1), discharging (writing a 0), or reading. Unfortunately, such tiny capacitors have very small capacitances, and their charge tends to "leak" away through any circuit impedances quite rapidly. To combat this tendency, engineers designed circuits internal to the RAM memory chip which would periodically read all cells and recharge (or "refresh") the
capacitors as needed. Although this added to the complexity of the circuit, it still required far less componentry than a RAM built of multivibrators. They called this type of memory circuit a dynamic RAM, because of its need of periodic refreshing.
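The leak-and-refresh cycle can be sketched in a few lines. All the numbers here are illustrative (the leak factor and threshold are mine, chosen only to show the principle): charge bleeds away with time, but as long as the refresh pass runs before the charge falls below the detection threshold, the bit survives.

```python
# Illustrative sketch of a dynamic RAM cell: a leaky "capacitor"
# restored by periodic refresh.
class DynamicCell:
    def __init__(self, bit=0):
        self.charge = 1.0 if bit else 0.0

    def leak(self, factor=0.8):
        self.charge *= factor           # charge bleeds away through impedances

    def read(self, threshold=0.5):
        return 1 if self.charge > threshold else 0

    def refresh(self):
        # Re-write whatever bit can still be detected, at full charge.
        self.charge = 1.0 if self.read() else 0.0

cell = DynamicCell(bit=1)
cell.leak()
cell.leak()             # charge is now 0.64 -- weakened but still readable
cell.refresh()          # restored to full charge before the bit is lost
print(cell.read())      # 1
```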
Recent advances in IC chip manufacturing have led to the introduction of flash memory, which works on a capacitive storage principle like the dynamic RAM, but uses the insulated gate of a MOSFET as the capacitor itself.
Before the advent of transistors (especially the MOSFET), engineers had to implement digital circuitry with gates constructed from vacuum tubes. As you can imagine, the enormous size and power consumption of a vacuum tube compared to a transistor made memory circuits like static and dynamic RAM a practical impossibility. Other, rather ingenious, techniques to store digital data without the use of moving parts were developed.
Historical, nonmechanical memory technologies
Perhaps the most ingenious technique was that of the delay line. A delay line is any kind of device which delays the propagation of a pulse or wave signal. If you've ever heard a sound echo back and forth through a canyon or cave, you've experienced an audio delay line: the noise wave travels at the speed of sound, bouncing off of walls and reversing direction of travel. The delay line "stores" data on a very temporary basis if the signal is not strengthened periodically, but the very fact that it stores data at all is a phenomenon exploitable for memory technology.
Early computer delay lines used long tubes filled with liquid mercury, which was used as the physical medium through which sound waves traveled along the length of the tube. An electrical/sound transducer was mounted at each end, one to create sound waves from electrical impulses, and the other to generate electrical impulses from sound waves. A stream of serial binary data was sent to the transmitting transducer as a voltage signal. The sequence of sound waves would travel from one end of the tube to the other through the mercury and be received by the transducer at the far end, which would receive the pulses in the same order as they were transmitted:
A feedback circuit connected to the receiving transducer would drive the transmitting transducer again, sending the same sequence of pulses through the tube as sound waves, storing the data as long as the feedback circuit continued to function. The delay line functioned like a first-in-first-out (FIFO) shift register, and external feedback turned that shift register behavior into a ring counter, cycling the bits around indefinitely.
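The recirculation described above can be sketched with a queue (an abstraction, not a model of the acoustics; the class name is mine): each tick, the bit arriving at the receiving transducer is fed back into the transmitting end, so the stored pattern cycles forever while the loop runs.

```python
from collections import deque

# Sketch of a recirculating delay-line memory: a FIFO closed into a ring.
class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)         # bits currently "in flight"

    def tick(self):
        bit = self.line.popleft()       # bit arrives at the receiving end
        self.line.append(bit)           # feedback re-transmits it immediately
        return bit

dl = DelayLine([1, 0, 1, 1])
out = [dl.tick() for _ in range(8)]
print(out)   # [1, 0, 1, 1, 1, 0, 1, 1] -- the pattern cycles indefinitely
```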
The delay line concept suffered numerous limitations from the materials and technology then available. The EDVAC computer of the early 1950's used 128 mercury-filled tubes, each one about 5 feet long and storing a maximum of 384 bits. Temperature changes would affect the speed of sound in the mercury, thus skewing the time delay in each tube and causing timing problems. Later designs replaced the liquid mercury medium with solid rods of glass, quartz, or special metal that delayed torsional (twisting) waves rather than longitudinal (lengthwise) waves, and operated at much higher frequencies.
One such delay line used a special nickel-iron-titanium wire (chosen for its good temperature stability) about 95 feet in length, coiled to reduce the overall package size. The total delay time from one end of the wire to the other was about 9.8 milliseconds, and the highest practical clock frequency was 1 MHz. This meant that approximately 9800 bits of data could be stored in the delay line wire at any given time. Given different means of delaying signals which wouldn't be so susceptible to environmental variables (such as serial pulses of light within a long optical fiber), this approach might someday find re-application.
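The figure of roughly 9800 bits follows directly from the two numbers given above: the capacity of a delay line is the end-to-end delay time multiplied by the clock frequency.

```python
# Delay-line capacity: bits "in flight" = delay time x clock frequency.
delay_seconds = 9.8e-3      # ~9.8 ms end-to-end delay in the wire
clock_hz = 1.0e6            # 1 MHz practical clock
capacity_bits = round(delay_seconds * clock_hz)
print(capacity_bits)        # 9800
```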
Another approach experimented with by early computer engineers was the use of a cathode ray tube (CRT), the type commonly used for oscilloscope, radar, and television viewscreens, to store binary data. Normally, the focused and directed electron beam in a CRT would be used to make bits of phosphor chemical on the inside of the tube glow, thus producing a viewable image on the screen. In this application, however, the desired result was the creation of an electric charge on the glass of the screen by the impact of the electron beam, which would then be detected by a metal grid placed directly in front of the CRT. Like the delay line, the so-called Williams Tube memory needed to be periodically refreshed with external circuitry to retain its data. Unlike the delay line mechanisms, it was virtually immune to the environmental factors of temperature and vibration. The IBM model 701 computer sported a Williams Tube memory with 4
Kilobyte capacity and a bad habit of "overcharging" bits on the tube screen with
successive re-writes so that false "1" states might overflow to adjacent spots on the
screen.
The next major advance in computer memory came when engineers turned to magnetic materials as a means of storing binary data. It was discovered that certain compounds of iron, namely "ferrite," possessed hysteresis curves that were almost square:
Shown on a graph with the strength of the applied magnetic field on the horizontal axis (field intensity), and the actual magnetization (orientation of electron spins in the ferrite material) on the vertical axis (flux density), ferrite won't become magnetized in one direction until the applied field exceeds a critical threshold value. Once that critical value is exceeded, the electrons in the ferrite "snap" into magnetic alignment and the ferrite becomes magnetized. If the applied field is then turned off, the ferrite maintains full magnetism. To magnetize the ferrite in the other direction (polarity), the applied magnetic field must exceed the critical value in the opposite direction. Once that critical value is exceeded, the electrons in the ferrite "snap" into magnetic alignment in the opposite direction. Once again, if the applied field is then turned off, the ferrite maintains full magnetism. To put it simply, the magnetization of a piece of ferrite is "bistable."
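The square hysteresis behavior amounts to a threshold latch, which can be sketched like this (an idealized model; the threshold value and names are mine): the magnetization flips only when the applied field exceeds the critical value in either direction, and holds its state when the field is removed.

```python
# Idealized model of a ferrite core's square hysteresis "latch."
CRITICAL = 1.0      # critical field threshold, in arbitrary units

class FerriteCore:
    def __init__(self):
        self.magnetization = -1          # start in the "reset" polarity

    def apply_field(self, h):
        if h > CRITICAL:
            self.magnetization = +1      # "snap" into one alignment
        elif h < -CRITICAL:
            self.magnetization = -1      # or into the opposite alignment
        # sub-critical fields leave the magnetization unchanged: the latch
        return self.magnetization

core = FerriteCore()
core.apply_field(1.5)        # exceeds threshold: core is "set"
core.apply_field(0.0)        # field removed: magnetism retained
print(core.magnetization)    # 1
```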
Exploiting this strange property of ferrite, we can use this natural magnetic "latch" to store a binary bit of data. To set or reset this "latch," we can use electric current through a wire or coil to generate the necessary magnetic field, which will then be applied to the ferrite. Jay Forrester of MIT applied this principle in inventing the magnetic "core" memory, which became the dominant computer memory technology during the 1970's.
A grid of wires, electrically insulated from one another, crossed through the center of many ferrite rings, each of which was called a "core." As DC current moved through any wire from the power supply to ground, a circular magnetic field was generated around that energized wire. The resistor values were set so that the amount of current at the regulated power supply voltage would produce slightly more than 1/2 the critical magnetic field strength needed to magnetize any one of the ferrite rings. Therefore, if the column #4 wire was energized, all the cores in that column would be subjected to the magnetic field from that one wire, but it would not be strong enough to change the magnetization of any of those cores. However, if the column #4 wire and the row #5 wire were both energized, the core at the intersection of column #4 and row #5 would be subjected to the sum of those two magnetic fields: a magnitude strong enough to "set" or "reset" the magnetization of that core. In other words, each core was addressed by the intersection of row and column. The distinction between "set" and "reset" was the direction of the core's magnetic polarity, and that bit value of data would be determined by the polarity of the voltages (with respect to ground) with which the row and column wires were energized.
The following photograph shows a core memory board from a Data General brand, "Nova" model computer, circa late 1960's or early 1970's. It had a total storage capacity of 4 kbytes (that's kilobytes, not megabytes!). A ball-point pen is shown for size comparison:
The electronic components seen around the periphery of this board are used for "driving"
the column and row wires with current, and also to read the status of a core. A close-up photograph reveals the ring-shaped cores, through which the matrix wires thread. Again,
a ball-point pen is shown for size comparison:
A core memory board of later design (circa 1971) is shown in the next photograph. Its
cores are much smaller and more densely packed, giving more memory storage capacity
than the former board (8 kbytes instead of 4 kbytes):
And, another close-up of the cores:
Writing data to core memory was easy enough, but reading that data was a bit of a trick. To facilitate this essential function, a "read" wire was threaded through all the cores in a memory matrix, one end of it being grounded and the other end connected to an amplifier circuit. A pulse of voltage would be generated on this "read" wire if the addressed core changed states (from 0 to 1, or 1 to 0). In other words, to read a core's value, you had to write either a 1 or a 0 to that core and monitor the voltage induced on the read wire to see if the core changed state. Obviously, if the core's state was changed, you would have to re-set it back to its original state, or else the data would be lost. This process is known as a destructive read, because data may be changed (destroyed) as it is read. Thus, refreshing is necessary with core memory, although not in every case (that is, not when the core's state did not change as the 1 or 0 was written to it).
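The write-then-restore procedure can be sketched as follows (a behavioral model, not a circuit; the names are mine): write a known 0 to the core, infer its old value from whether the read wire pulsed, and write the old value back if it was destroyed.

```python
# Sketch of core memory's destructive read.
class CoreBit:
    def __init__(self, value=0):
        self.value = value

    def write(self, value):
        pulsed = (self.value != value)   # the read wire pulses only on a flip
        self.value = value
        return pulsed

def destructive_read(core):
    pulsed = core.write(0)               # try writing a 0 to the core
    old = 1 if pulsed else 0             # a pulse means the core held a 1
    if pulsed:
        core.write(old)                  # restore the destroyed data
    return old

core = CoreBit(1)
print(destructive_read(core))   # 1
print(core.value)               # 1 -- the data survives the read
```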
One major advantage of core memory over delay lines and Williams Tubes was nonvolatility. The ferrite cores maintained their magnetization indefinitely, with no power or refreshing required. It was also relatively easy to build, denser, and physically more rugged than any of its predecessors. Core memory was used from the 1960's until the late 1970's in many computer systems, including the computers used for the Apollo space program, CNC machine tool control computers, business ("mainframe") computers, and industrial control systems. Despite the fact that core memory is long obsolete, the term "core" is still sometimes used with reference to a computer's RAM memory.
All the while that delay lines, Williams Tube, and core memory technologies were being invented, the simple static RAM was being improved with smaller active component (vacuum tube or transistor) technology. Static RAM was never totally eclipsed by its competitors: even the old ENIAC computer of the 1940's used vacuum tube ring-counter circuitry for data registers and computation. Eventually though, smaller and smaller scale IC chip manufacturing technology gave transistors the practical edge over other technologies, and core memory became a museum piece in the 1980's.
One last attempt at a magnetic memory better than core was the bubble memory. Bubble memory took advantage of a peculiar phenomenon in a mineral called garnet, which, when arranged in a thin film and exposed to a constant magnetic field perpendicular to the film, supported tiny regions of oppositely-magnetized "bubbles" that could be nudged along the film by prodding with other external magnetic fields. "Tracks" could be laid on the garnet to focus the movement of the bubbles by depositing magnetic material on the surface of the film. A continuous track was formed on the garnet which gave the bubbles a long loop in which to travel, and motive force was applied to the bubbles with a pair of wire coils wrapped around the garnet and energized with a 2-phase voltage. Bubbles could be created or destroyed with a tiny coil of wire strategically placed in the bubbles' path.
The presence of a bubble represented a binary "1" and the absence of a bubble represented a binary "0." Data could be read and written in this chain of moving magnetic bubbles as they passed by the tiny coil of wire, much the same as the read/write "head" in a cassette tape player reads the magnetization of the tape as it moves. Like core memory, bubble memory was nonvolatile: a permanent magnet supplied the necessary
background field needed to support the bubbles when the power was turned off. Unlike core memory, however, bubble memory had phenomenal storage density: millions of bits could be stored on a chip of garnet only a couple of square inches in size. What killed bubble memory as a viable alternative to static and dynamic RAM was its slow, sequential data access. Being nothing more than an incredibly long serial shift register (ring counter), access to any particular portion of data in the serial string could be quite slow compared to other memory technologies.
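The access-latency problem can be sketched with a rotating loop (an abstraction; the class name is mine): all the bits circulate past a single read/write coil, so reading position n costs n shifts of waiting.

```python
from collections import deque

# Sketch of bubble memory's serial, loop-shaped access.
class BubbleLoop:
    def __init__(self, bits):
        self.loop = deque(bits)          # bubbles circulating on the track

    def read_at(self, n):
        shifts = 0
        for _ in range(n):               # rotate until bit n reaches the coil
            self.loop.rotate(-1)
            shifts += 1
        return self.loop[0], shifts

mem = BubbleLoop([0, 1, 1, 0, 1, 0, 0, 1])
bit, waited = mem.read_at(5)
print(bit, waited)   # 0 5 -- latency grows with the bit's position in the loop
```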
An electrostatic equivalent of the bubble memory is the Charge-Coupled Device (CCD) memory, an adaptation of the CCD devices used in digital photography. Like bubble memory, the bits are serially shifted along channels on the substrate material by clock pulses. Unlike bubble memory, the electrostatic charges decay and must be refreshed. CCD memory is therefore volatile, with high storage density and sequential access. Interesting, isn't it? The old Williams Tube memory was adapted from CRT viewing technology, and CCD memory from video recording technology.
Read-only memory

Read-only memory (ROM) is similar in design to static or dynamic RAM circuits, except that the "latching" mechanism is made for one-time (or limited) operation. The simplest type of ROM is that which uses tiny "fuses" which can be selectively blown or left alone to represent the two binary states. Obviously, once one of the little fuses is blown, it cannot be made whole again, so the writing of such ROM circuits is one-time only. Because they can be written (programmed) once, these circuits are sometimes referred to as PROMs (Programmable Read-Only Memory).
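One-time programmability can be sketched with a simple fuse model (illustrative only; the class name and the intact-fuse-reads-1 convention are mine): every fuse starts intact, programming can only blow fuses, and a blown fuse can never be restored.

```python
# Sketch of a one-time-programmable fuse-based PROM.
class PROM:
    def __init__(self, size):
        self.fuses = [1] * size          # intact fuse = binary 1

    def program(self, address, bit):
        if bit == 0:
            self.fuses[address] = 0      # blowing a fuse is irreversible
        elif self.fuses[address] == 0:
            raise ValueError("fuse already blown; cannot restore a 1")

    def read(self, address):
        return self.fuses[address]

rom = PROM(8)
rom.program(3, 0)                # blow the fuse at address 3
print(rom.read(3), rom.read(4))  # 0 1
```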
However, not all writing methods are as permanent as blown fuses. If a transistor latch can be made which is resettable only with significant effort, a memory device that's something of a cross between a RAM and a ROM can be built. Such a device is given a rather oxymoronic name: the EPROM (Erasable Programmable Read-Only Memory). EPROMs come in two basic varieties: electrically-erasable (EEPROM) and ultraviolet-erasable (UV/EPROM). Both types of EPROMs use capacitive charge MOSFET devices to latch on or off. UV/EPROMs are "cleared" by long-term exposure to ultraviolet light. They are easy to identify: they have a transparent glass window which exposes the silicon chip material to light. Once programmed, you must cover that glass window with tape to prevent ambient light from degrading the data over time. EPROMs are often programmed using higher signal voltages than what is used during "read-only" mode.
Memory with moving parts: "Drives"
The earliest form of digital data storage involving moving parts was that of the punched paper card. Joseph Marie Jacquard invented a weaving loom in 1801 which automatically followed weaving instructions set by carefully placed holes in paper cards. This same technology was adapted to electronic computers in the 1950's, with the cards being read
mechanically (metal-to-metal contact through the holes), pneumatically (air blown
through the holes, the presence of a hole sensed by air nozzle backpressure), or optically
(light shining through the holes).
An improvement over paper cards is the paper tape, still used in some industrial environments (notably the CNC machine tool industry), where data storage and speed demands are low and ruggedness is highly valued. Instead of wood-fiber paper, mylar material is often used, with optical reading of the tape being the most popular method.
Magnetic tape (very similar to audio or video cassette tape) was the next logical improvement in storage media. It is still widely used today as a means to store "backup" data for archiving and emergency restoration for other, faster methods of data storage. Like paper tape, magnetic tape is sequential access, rather than random access. In early home computer systems, regular audio cassette tape was used to store data in modulated form, the binary 1's and 0's represented by different frequencies (similar to FSK data communication). Access speed was terribly slow (if you were reading ASCII text from the tape, you could almost keep up with the pace of the letters appearing on the computer's screen!), but it was cheap and fairly reliable.
Tape suffered the disadvantage of being sequential access. To address this weak point, magnetic storage "drives" with disk- or drum-shaped media were built. An electric motor provided constant-speed motion. A movable read/write coil (also known as a "head") was provided which could be positioned via servo-motors to various locations on the height of the drum or the radius of the disk, giving access that is almost random (you might still have to wait for the drum or disk to rotate to the proper position once the read/write coil has reached the right location).
The disk shape lent itself best to portable media, and thus the floppy disk was born. Floppy disks (so-called because the magnetic media is thin and flexible) were originally made in 8-inch diameter formats. Later, the 5-1/4 inch variety was introduced, which was made practical by advances in media particle density. All things being equal, a larger disk has more space upon which to write data. However, storage density can be improved by making the little grains of iron-oxide material on the disk substrate smaller. Today, the 3-1/2 inch floppy disk is the preeminent format, with a capacity of 1.44 Mbytes (2.88 Mbytes on SCSI drives). Other portable drive formats are becoming popular, with IoMega's 100 Mbyte "ZIP" and 1 Gbyte "JAZ" disks appearing as original equipment on some personal computers.
Still, floppy drives have the disadvantage of being exposed to harsh environments, being constantly removed from the drive mechanism which reads, writes, and spins the media. The first disks were enclosed units, sealed from all dust and other particulate matter, and were definitely not portable. Keeping the media in an enclosed environment allowed engineers to avoid dust altogether, as well as spurious magnetic fields. This, in turn, allowed for much closer spacing between the head and the magnetic material, resulting in a much tighter-focused magnetic field to write data to the magnetic material.
The following photograph shows a hard disk drive "platter" of approximately 30 Mbytes
storage capacity. A ball-point pen has been set near the bottom of the platter for size
reference:
Modern disk drives use multiple platters made of hard material (hence the name, "hard
drive") with multiple read/write heads for every platter. The gap between head and platter
is much smaller than the diameter of a human hair. If the hermetically-sealed
environment inside a hard disk drive is contaminated with outside air, the hard drive will be rendered useless. Dust will lodge between the heads and the platters, causing damage
to the surface of the media.
Here is a hard drive with four platters, although the angle of the shot only allows viewing of the top platter. This unit is complete with drive motor, read/write heads, and associated electronics. It has a storage capacity of 340 Mbytes, and is about the same length as the ball-point pen shown in the previous photograph:
While it is inevitable that non-moving-part technology will replace mechanical drives in the future, current state-of-the-art electromechanical drives continue to rival "solid-state" nonvolatile memory devices in storage density, and at a lower cost. In 1998, a 250 Mbyte hard drive was announced that was approximately the size of a quarter (smaller than the metal platter hub in the center of the last hard disk photograph)! In any case, storage density and reliability will undoubtedly continue to improve.
An incentive for digital data storage technology advancement was the advent of digitally encoded music. A joint venture between Sony and Philips resulted in the release of the "compact audio disc" (CD) to the public in the early 1980's. This technology is a read-only type, the media being a transparent plastic disc backed by a thin film of aluminum. Binary bits are encoded as pits in the plastic which vary the path length of a low-power laser beam. Data is read by the low-power laser (the beam of which can be focused more precisely than normal light) reflecting off the aluminum to a photocell receiver.
The advantages of CDs over magnetic tape are legion. Being digital, the information is
highly resistant to corruption. Being non-contact in operation, there is no wear incurred
through playing. Being optical, they are immune to magnetic fields (which can easily
corrupt data on magnetic tape or disks). It is possible to purchase CD "burner" drives which contain the high-power laser necessary to write to a blank disc.
Following on the heels of the music industry, the video entertainment industry has
leveraged the technology of optical storage with the introduction of the Digital Video
Disc, or DVD. Using a similar-sized plastic disc as the music CD, a DVD employs closer
spacing of pits to achieve much greater storage density. This increased density allows
feature-length movies to be encoded on DVD media, complete with trivia information
about the movie, director's notes, and so on.
Much effort is being directed toward the development of a practical read/write optical disc (CD-RW). Success has been found in using chemical substances whose color may be changed through exposure to bright laser light, then "read" by lower-intensity light. These optical discs are immediately identified by their characteristically colored surfaces, as opposed to the silver-colored underside of a standard CD.
How Virtual Memory Works
Virtual memory is a common part of most operating systems on desktop computers. It
has become so common because it provides a big benefit for users at a very low cost.
In this article, you will learn exactly what virtual memory is, what your computer uses it
for and how to configure it on your own machine to achieve optimal performance.
Most computers today have something like 32 or 64 megabytes of RAM available for the
CPU to use (see How RAM Works for details on RAM). Unfortunately, that amount of RAM is not enough to run all of the programs that most users expect to run at once.
For example, if you load the operating system, an e-mail program, a Web browser and
word processor into RAM simultaneously, 32 megabytes is not enough to hold it all. If there were no such thing as virtual memory, then once you filled up the available RAM your computer would have to say, "Sorry, you cannot load any more applications. Please close another application to load a new one." With virtual memory, the computer can look at RAM for areas that have not been used recently and copy them onto the hard disk. This frees up space in RAM to load the new application.
Because this copying happens automatically, you don't even know it is happening, and it makes your computer feel like it has unlimited RAM space even though it only has 32 megabytes installed. Because hard disk space is so much cheaper than RAM chips, virtual memory also has a nice economic benefit.
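The "look at RAM for areas that have not been used recently" step is essentially a least-recently-used (LRU) policy. Here is a minimal sketch of the idea in Python -- the class and names are ours for illustration, not a real operating-system API:

```python
from collections import OrderedDict

class VirtualMemory:
    """Toy model: a small RAM backed by a large 'disk' (a plain dict)."""
    def __init__(self, ram_slots):
        self.ram = OrderedDict()   # page -> data, ordered by recency of use
        self.disk = {}             # where evicted pages are swapped out
        self.ram_slots = ram_slots

    def access(self, page, data=None):
        if page in self.ram:                  # page already resident in RAM
            self.ram.move_to_end(page)        # mark it as recently used
        else:
            if page in self.disk:             # swapped out earlier: page it back in
                data = self.disk.pop(page)
            if len(self.ram) >= self.ram_slots:
                old, old_data = self.ram.popitem(last=False)  # evict the LRU page
                self.disk[old] = old_data     # "copy it onto the hard disk"
            self.ram[page] = data
        return self.ram[page]

vm = VirtualMemory(ram_slots=2)
vm.access("email", "inbox")
vm.access("browser", "tabs")
vm.access("wordproc", "doc")   # RAM is full, so "email" is swapped to disk
print("email" in vm.disk)      # True
```

Accessing a swapped-out page later pulls it back from the "disk" and pushes out whichever resident page was used least recently -- exactly the automatic shuffling the paragraph describes.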
The read/write speed of a hard drive is much slower than RAM, and the technology of a
hard drive is not geared toward accessing small pieces of data at a time. If your system
has to rely too heavily on virtual memory, you will notice a significant performance drop.
The key is to have enough RAM to handle everything you tend to work on simultaneously -- then, the only time you "feel" the slowness of virtual memory is when there's a slight pause as you're changing tasks. When that's the case, virtual memory is perfect.
When it is not the case, the operating system has to constantly swap information back and
forth between RAM and the hard disk. This is called thrashing, and it can make your
computer feel incredibly slow.
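Thrashing is easy to reproduce in a toy model: once the set of pages a program touches no longer fits in RAM, every access can force a swap. A hedged sketch using simple FIFO page replacement (the function and workload are illustrative, not how a real OS measures this):

```python
def swap_count(accesses, ram_slots):
    """Count disk swaps under a simple FIFO page-replacement model."""
    ram = []
    swaps = 0
    for page in accesses:
        if page not in ram:
            swaps += 1                  # page must be brought in from disk
            if len(ram) >= ram_slots:
                ram.pop(0)              # evict the oldest resident page
            ram.append(page)
    return swaps

workload = [1, 2, 3, 1, 2, 3] * 10     # working set of 3 pages, 60 accesses
print(swap_count(workload, 3))          # 3: fits in RAM, only cold misses
print(swap_count(workload, 2))          # 60: every access swaps -- thrashing
```

Shrinking RAM by a single slot below the working set turns 3 swaps into 60 -- the cliff-edge behavior that makes a thrashing machine feel incredibly slow.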
The area of the hard disk that stores the RAM image is called a page file. It holds pages
of RAM on the hard disk, and the operating system moves data back and forth between
the page file and RAM. On a Windows machine, page files have a .SWP extension.
Next, we'll look at how to configure virtual memory on a computer.
Configuring Virtual Memory
Windows 98 is an example of a typical operating system that has virtual memory.
Windows 98 has an intelligent virtual memory manager that uses a default setting to help Windows allocate hard drive space for virtual memory as needed. For most circumstances, this should meet your needs, but you may want to manually configure virtual memory, especially if you have more than one physical hard drive or speed-critical applications.
To do this, open the "Control Panel" window and double-click on the "System" icon. The system dialog window will open. Click on the "Performance" tab and then click on the
"Virtual Memory" button.
Click on the option that says, "Let me specify my own virtual memory settings." This
will make the options below that statement become active. Click on the drop-down list
beside "Hard disk:" to select the hard drive that you wish to configure virtual memory
for. Remember that a good rule of thumb is to equally split virtual memory between the physical hard disks you have.
In the "Minimum:" box, enter the smallest amount of hard drive space you wish to use for
virtual memory on the hard disk specified. The amounts are in megabytes. For the "C:"
drive, the minimum should be 2 megabytes. The "Maximum:" figure can be anything you like, but one possible upper limit is twice physical RAM space. The Windows default is normally 12 megabytes above the amount of physical RAM in your computer. To put the new settings into effect, close the dialog box and restart your computer.
The amount of hard drive space you allocate for virtual memory is important. If you allocate too little, you will get "Out of Memory" errors. If you find that you need to keep increasing the size of the virtual memory, you probably are also finding that your system is sluggish and accesses the hard drive constantly. In that case, you should consider buying more RAM to keep the ratio between RAM and virtual memory at about 2:1. Some applications enjoy having lots of virtual memory space but do not access it very much. In that case, large paging files work well.
One trick that can improve the performance of virtual memory (especially when large
amounts of virtual memory are needed) is to make the minimum and maximum sizes of
the virtual memory file identical. This forces the operating system to allocate the entire
paging file when you start the machine. That keeps the paging file from having to grow while programs are running, which improves performance. Many video applications recommend this technique to avoid pauses while reading or writing video information between hard disk and tape.
Another factor in the performance of virtual memory is the location of the page file. If your system has multiple physical hard drives (not multiple drive letters, but actual drives), you can spread the work among them by making smaller page files on each drive. This simple modification will significantly speed up any system that makes heavy use of virtual memory.
For more information, check out the links on the next page.
How Caching Works
Caching greatly increases the speed at which your computer pulls bits and bytes from memory.
If you have been shopping for a computer, then you have heard the word "cache." Modern computers have both L1 and L2 caches, and many now also have L3 cache. You
may also have gotten advice on the topic from well-meaning friends, perhaps something
like "Don't buy that Celeron chip, it doesn't have any cache in it!"
It turns out that caching is an important computer-science process that appears on every
computer in a variety of forms. There are memory caches, hardware and software disk
caches, page caches and more. Virtual memory is even a form of caching. In this article, we will explore caching so you can understand why it is so important.
A Simple Example: Before Cache

Caching is a technology based on the memory subsystem of your computer. The main purpose of a cache is to accelerate your computer while keeping the price of the computer low. Caching allows you to do your computer tasks more rapidly.
To understand the basic idea behind a cache system, let's start with a super-simple
example that uses a librarian to demonstrate caching concepts. Let's imagine a librarian behind his desk. He is there to give you the books you ask for. For the sake of simplicity, let's say you can't get the books yourself -- you have to ask the librarian for any book you want to read, and he fetches it for you from a set of stacks in a storeroom (the Library of Congress in Washington, D.C., is set up this way). First, let's start with a librarian without cache.
The first customer arrives. He asks for the book Moby Dick . The librarian goes into the
storeroom, gets the book, returns to the counter and gives the book to the customer. Later, the client comes back to return the book. The librarian takes the book and returns it to the storeroom. He then returns to his counter and waits for another customer. Let's say the next customer asks for Moby Dick (you saw it coming...). The librarian then has to return to the storeroom to get the book he recently handled and give it to the client. Under this
model, the librarian has to make a complete round trip to fetch every book -- even very
popular ones that are requested frequently. Is there a way to improve the performance of the librarian?
Yes, there's a way -- we can put a cache on the librarian. In the next section, we'll look at
this same example, but this time the librarian will use a caching system.
A Simple Example: After Cache
Let's give the librarian a backpack into which he will be able to store 10 books (in computer terms, the librarian now has a 10-book cache). In this backpack, he will put the books the clients return to him, up to a maximum of 10. Let's use the prior example, but now with our new-and-improved caching librarian.

The day starts. The backpack of the librarian is empty. Our first client arrives and asks for Moby Dick. No magic here -- the librarian has to go to the storeroom to get the book. He gives it to the client. Later, the client returns and gives the book back to the librarian. Instead of returning to the storeroom to return the book, the librarian puts the book in his backpack and stands there (he checks first to see if the bag is full -- more on that later).
Another client arrives and asks for Moby Dick. Before going to the storeroom, the librarian checks to see if this title is in his backpack. He finds it! All he has to do is take the book from the backpack and give it to the client. There's no journey into the storeroom, so the client is served more efficiently.
What if the client asked for a title not in the cache (the backpack)? In this case, the librarian is less efficient with a cache than without one, because the librarian takes the time to look for the book in his backpack first. One of the challenges of cache design is to minimize the impact of cache searches, and modern hardware has reduced this time delay to practically zero. Even in our simple librarian example, the latency time (the waiting time) of searching the cache is so small compared to the time to walk back to the storeroom that it is irrelevant. The cache is small (10 books), and the time it takes to notice a miss is only a tiny fraction of the time that a journey to the storeroom takes.
From this example you can see several important facts about caching:
• Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type.
• When using a cache, you must check the cache to see if an item is in there. If it is there, it's called a cache hit. If not, it is called a cache miss and the computer must wait for a round trip from the larger, slower memory area.
• A cache has some maximum size that is much smaller than the larger storage area.
• It is possible to have multiple layers of cache. With our librarian example, the smaller but faster memory type is the backpack, and the storeroom represents the larger and slower memory type. This is a one-level cache. There might be another layer of cache consisting of a shelf that can hold 100 books behind the counter. The librarian can check the backpack, then the shelf and then the storeroom. This would be a two-level cache.
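The librarian example maps directly onto a few lines of code. In this sketch (our own simplification -- books go into the backpack as soon as they are fetched, rather than when they are returned), the backpack is the cache and each storeroom trip is a miss:

```python
class Librarian:
    def __init__(self, capacity=10):
        self.backpack = []             # the 10-book cache
        self.capacity = capacity
        self.hits = self.misses = 0

    def fetch(self, title):
        if title in self.backpack:     # cache hit: no storeroom trip
            self.hits += 1
        else:                          # cache miss: walk to the storeroom
            self.misses += 1
            if len(self.backpack) >= self.capacity:
                self.backpack.pop(0)   # bag is full: drop the oldest book
            self.backpack.append(title)
        return title

lib = Librarian()
for title in ["Moby Dick", "Moby Dick", "Walden", "Moby Dick"]:
    lib.fetch(title)
print(lib.hits, lib.misses)   # 2 2 -- repeat requests are served from the bag
```

A two-level cache would simply add a second, larger list (the 100-book shelf) that is checked after the backpack and before the storeroom.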
Computer Caches
A computer is a machine in which we measure time in very small increments. When the
microprocessor accesses the main memory (RAM), it does it in about 60 nanoseconds (60 billionths of a second). That's pretty fast, but it is much slower than the typical
microprocessor. Microprocessors can have cycle times as short as 2 nanoseconds, so to a
microprocessor 60 nanoseconds seems like an eternity.
What if we build a special memory bank on the motherboard, small but very fast (around 30 nanoseconds)? That's already two times faster than the main memory access. That's called a level 2 cache or an L2 cache. What if we build an even smaller but faster memory system directly into the microprocessor's chip? That way, this memory will be accessed at the speed of the microprocessor and not the speed of the memory bus. That's
an L1 cache, which on a 233-megahertz (MHz) Pentium is 3.5 times faster than the L2
cache, which is two times faster than the access to main memory.
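The payoff of these layers can be estimated with a simple weighted average. Using the article's figures for L2 (~30 ns) and main memory (~60 ns), and an assumed 90 percent hit rate (the hit rate is our illustrative assumption, not from the article):

```python
def effective_access(hit_rate, t_cache, t_next):
    """Average access time when a fraction hit_rate is served by the cache."""
    return hit_rate * t_cache + (1 - hit_rate) * t_next

# L2 at ~30 ns in front of ~60 ns main memory, assuming 90% of accesses hit:
t = effective_access(0.90, 30, 60)
print(round(t, 1))   # 33.0 -- close to cache speed despite 60 ns misses
```

The higher the hit rate, the closer the whole memory system behaves to the speed of the cache alone -- which is why the next sections dwell on why hit rates are so high in practice.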
Some microprocessors have two levels of cache built right into the chip. In this case, the
motherboard cache -- the cache that exists between the microprocessor and main system
memory -- becomes level 3, or L3 cache.
There are a lot of subsystems in a computer; you can put cache between many of them to improve performance. Here's an example. We have the microprocessor (the fastest thing in the computer). Then there's the L1 cache that caches the L2 cache that caches the main memory, which can be used (and is often used) as a cache for even slower peripherals like hard disks and CD-ROMs. The hard disks are also used to cache an even slower medium -- your Internet connection.
Caching Subsystems
Your Internet connection is the slowest link in your computer. So your browser (Internet Explorer, Netscape, Opera, etc.) uses the hard disk to store HTML pages, putting them into a special folder on your disk. The first time you ask for an HTML page, your browser renders it and a copy of it is also stored on your disk. The next time you request access to this page, your browser checks if the date of the file on the Internet is newer than the one cached. If the date is the same, your browser uses the one on your hard disk instead of downloading it from the Internet. In this case, the smaller but faster memory system is your hard disk and the larger and slower one is the Internet.
Cache can also be built directly into peripherals. Modern hard disks come with fast memory, around 512 kilobytes, hardwired to the hard disk. The computer doesn't directly use this memory -- the hard-disk controller does. For the computer, these memory chips are the disk itself. When the computer asks for data from the hard disk, the hard-disk controller checks this memory before moving the mechanical parts of the hard disk (which is very slow compared to memory). If it finds the data the computer asked for in the cache, it will return that data without actually accessing the disk itself, saving a lot of time.
Here's an experiment you can try. Your computer caches your floppy drive with main memory, and you can actually see it happening. Access a large file from your floppy -- for example, open a 300-kilobyte text file in a text editor. The first time, you will see the light on your floppy turning on, and you will wait. The floppy disk is extremely slow, so it will take 20 seconds to load the file. Now, close the editor and open the same file again. The second time (don't wait 30 minutes or do a lot of disk access between the two tries) you won't see the light turning on, and you won't wait. The operating system checked its memory cache for the floppy disk and found what it was looking for. So instead of waiting 20 seconds, the data was found in a memory subsystem much faster than when you first tried it (one access to the floppy disk takes 120 milliseconds, while one access to main memory takes around 60 nanoseconds -- that's a lot faster). You could have run the same test on your hard disk, but it's more evident on the floppy drive because it's so slow.
To give you the big picture of it all, here's a list of a normal caching system:
• L1 cache - Memory accesses at full microprocessor speed (10 nanoseconds, 4 kilobytes to 16 kilobytes in size)
• L2 cache - Memory access of type SRAM (around 20 to 30 nanoseconds, 128 kilobytes to 512 kilobytes in size)
• Main memory - Memory access of type RAM (around 60 nanoseconds, 32 megabytes to 128 megabytes in size)
• Hard disk - Mechanical, slow (around 12 milliseconds, 1 gigabyte to 10 gigabytes in size)
• Internet - Incredibly slow (between 1 second and 3 days, unlimited size)
As you can see, the L1 cache caches the L2 cache, which caches the main memory,
which can be used to cache the disk subsystems, and so on.
Cache Technology

One common question asked at this point is, "Why not make all of the computer's memory run at the same speed as the L1 cache, so no caching would be required?" That would work, but it would be incredibly expensive. The idea behind caching is to use a small amount of expensive memory to speed up a large amount of slower, less-expensive memory.
In designing a computer, the goal is to allow the microprocessor to run at its full speed as
inexpensively as possible. A 500-MHz chip goes through 500 million cycles in one
second (one cycle every two nanoseconds). Without L1 and L2 caches, an access to the
main memory takes 60 nanoseconds, or about 30 wasted cycles accessing memory.
When you think about it, it is kind of incredible that such relatively tiny amounts of
memory can maximize the use of much larger amounts of memory. Think about a 256-
kilobyte L2 cache that caches 64 megabytes of RAM. In this case, 256,000 bytesefficiently caches 64,000,000 bytes. Why does that work?
In computer science, we have a theoretical concept called locality of reference. It means
that in a fairly large program, only small portions are ever used at any one time. As
strange as it may seem, locality of reference works for the huge majority of programs.Even if the executable is 10 megabytes in size, only a handful of bytes from that program
are in use at any one time, and their rate of repetition is very high. On the next page, you'll learn more about locality of reference.
Locality of Reference
Let's take a look at the following pseudo-code to see why locality of reference works (see
How C Programming Works to really get into it):
Output to screen "Enter a number between 1 and 100"
Read input from user
Put value from user in variable X
Put value 100 in variable Y
Put value 1 in variable Z
Loop Y number of time
Divide Z by X
If the remainder of the division = 0
then output "Z is a multiple of X"
Add 1 to Z
Return to loop
End
This small program asks the user to enter a number between 1 and 100. It reads the value
entered by the user. Then, the program divides every number between 1 and 100 by the number entered by the user. It checks if the remainder is 0 (modulo division). If so, the
program outputs "Z is a multiple of X" (for example, 12 is a multiple of 6), for every
number between 1 and 100. Then the program ends.
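For readers who prefer real code, here is a direct Python rendering of the pseudo-code above, wrapped in a function so the loop -- the "hot" part that benefits from caching -- is easy to see:

```python
def multiples(x, y=100):
    """The pseudo-code's loop: runs y times, so these are the 'hot' lines
    that the cache keeps close to the processor."""
    out = []
    for z in range(1, y + 1):
        if z % x == 0:                 # remainder of the division = 0?
            out.append(f"{z} is a multiple of {x}")
    return out

print(multiples(25))   # the four multiples of 25 between 1 and 100
```

Whatever value the user enters, the loop body is executed 100 times while everything else runs once -- the asymmetry the next paragraph relies on.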
Even if you don't know much about computer programming, it is easy to understand that in the 11 lines of this program, the loop part (lines 7 to 9) is executed 100 times. All of the other lines are executed only once. Lines 7 to 9 will run significantly faster because of caching.
This program is very small and can easily fit entirely in the smallest of L1 caches, but let's say this program is huge. The result remains the same. When you program, a lot of the action takes place inside loops. A word processor spends 95 percent of its time waiting for your input and displaying it on the screen. This part of the word-processor program is in the cache.
This 95%-to-5% ratio (approximately) is what we call the locality of reference, and it's why a cache works so efficiently. This is also why such a small cache can efficiently cache such a large memory system. You can see why it's not worth it to construct a computer with the fastest memory everywhere. We can deliver 95 percent of this effectiveness for a fraction of the cost.
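That 95/5 split translates into overall speedup via Amdahl's law. In the sketch below, the 95 percent figure comes from the article; the assumption that cached accesses run 10 times faster is our illustrative number:

```python
def speedup(cached_fraction, cache_speedup):
    """Overall speedup (Amdahl's law) when cached_fraction of accesses
    run cache_speedup times faster than uncached ones."""
    return 1 / ((1 - cached_fraction) + cached_fraction / cache_speedup)

print(round(speedup(0.95, 10), 2))   # ~6.9x overall from caching 95% of accesses
```

Note how the uncached 5 percent caps the benefit: even an infinitely fast cache could never deliver more than a 20x speedup here, which is exactly why chasing "fastest memory everywhere" isn't worth the cost.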
For more information on caching and related topics, check out the links on the next page.
Computer Memory Pictures
You'll find computer memory inside everyday gadgets such as cell phones, game
consoles, digital cameras and computers, and there are many different types of memory.
Take a look on the next pages to learn about the variety and functions of memory.
At the most basic level, computer memory starts with input from a source. It might be turning on your computer, using your mouse, saving a file or launching an application. From there, data can go into permanent or temporary storage. First, learn about types of permanent memory storage on the next page.
One type of permanent memory is read-only memory (ROM), an integrated circuit programmed with specific data when it is manufactured; it is commonly used to hold firmware. The types of ROM include PROM (which can be programmed once), EPROM (erasable and programmable many times), EEPROM (electrically erasable read-only memory) and flash memory (a type of EEPROM that uses in-circuit wiring to erase and rewrite data). See how EEPROM is used next.
The basic input-output system (BIOS) in computers uses flash memory to make sure
other chips and the CPU work together. The type of chip used is usually EEPROM.
Removable storage is another type of
permanent memory. The floppy disk was the first. Using magnetic technology like a cassette tape, these disks were made from a thin piece of plastic coated on both sides with a magnetic material that could be overwritten.
Optical storage replaced floppy disks as a form of permanent computer memory storage. The two main types are CD-ROMs and DVD-ROMs. A CD-ROM can store about 700MB of data, equal to roughly 500 floppy disks, and a DVD-ROM can store between 4 and 16GB.
Flash memory cards are another
popular removable storage device. Secure Digital (SD) cards are flash memory cards
frequently used in portable electronics. You can save files to an SD card and transfer them between devices or give them to someone else. See the next page to learn more.
Flash memory cards come in
several types: SD, SDHC and SDXC. The different types work in different devices and also have different storage capacities. Most cards today are SDHC, which range from 4GB to 32GB of storage.
The high-definition digital camcorder Handycam HDR-CX7, for example, records to a Memory Stick Pro Duo flash card. Flash memory sticks are often used for cameras and camcorders. Learn about the NAND and NOR flash technology next.
Here is a 32-gigabyte NAND memory card (for small electronics) and chip (for computer use). The NAND type is primarily used in flash storage devices, while the NOR type is for direct code execution. NOR has replaced the EEPROM chip in many digital devices and offers faster reads than NAND, but has less storage capacity. Learn how flash works with USB ports on the next page.
Pictured here is the nail-sized flash memory card "Pocket-bit mini." Like most flash drives, it can connect to a USB port. Most computers today have a USB 2.0 or USB 3.0 port. USB 3.0 provides 10x the data transfer rate of its predecessor. Flash memory can also be used as a hard drive. See the next page to learn more.
Fujitsu's FMV-Lifebook FMV-Q8230 is equipped with the company's first 32GB flash memory drive (SSD) instead of a hard disk drive (HDD). However, the cost per gigabyte is higher than with a typical hard disk drive. Learn more about hard drives on the next page.
An external hard drive can connect to your computer via USB cable and provide
additional memory storage and backup for your files. Getting an external hard drive twice
the size of your computer hard drive allows for backups and provides room to expand.
For large amounts of music and video files, 500GB and up is a good place to start.
Memory can also be freed up by saving files to a network server, which is commonly done in larger offices and universities. Network storage and cloud storage also provide a backup if the hard drive in your personal computer fails. See the next page to learn
about temporary computer storage: RAM.
A RAM memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are combined to create a memory cell, which represents a single bit of data.
RAM assists the operating system and provides temporary storage space. Pictured from
the top to bottom are SIMM, DIMM and SODIMM memory modules. SIMM stands for single in-line memory module, which was replaced by the DIMM (dual in-line memory module). Most laptops use the SODIMM (small outline DIMM) because of its compact size.
Every time you turn on a computer or open an application, RAM is hard at work. The
CPU requests the data it needs from RAM, processes it and writes new data back toRAM. In most computers, this shuffling of data happens millions of times every second.
RAM can easily be added to desktop or laptop computers to improve performance. If your system responds slowly or accesses the hard drive constantly, then you need to add more RAM. If you are running Windows XP, Microsoft recommends 128MB as the minimum RAM requirement.
RAM capacity can be extended with virtual memory: the computer looks at RAM for areas that have not been used recently and copies them onto the hard disk, freeing up space in RAM to load new applications. In addition to dynamic RAM, there is static RAM. Learn about what it does on the next page.
Static RAM never has to be refreshed. This makes static RAM significantly faster than dynamic RAM, but because it has more parts, a static memory cell takes up much more space on a chip. Static RAM is used for caching, which greatly increases the speed at which your computer pulls bits and bytes from memory. Caching copies data from the most-used main memory/RAM locations for quicker access. There is usually a level 1 and a level 2 cache. See the next page to see how caching fits into the overall scheme of computer memory.
As seen here, most computer data goes into random access memory (RAM) first. The CPU then stores pieces of data it will need to access often in a cache, and maintains certain
special instructions in the register. Learn more on the next page.
For those with a Windows operating system, the Windows registry is an enormous batch
of files containing information about almost everything that occurs on the computer, from
a visit to a Web site to a program installation. As it fills with information, the registry
may cause a computer's performance to suffer. Third-party registry-cleaner programs can help remove unneeded registry entries.