11 Chapter 4: Memory Instructor: Hengming Zou, Ph.D. In Pursuit of Absolute Simplicity.
11
Chapter 4: Memory
Instructor: Hengming Zou, Ph.D.
In Pursuit of Absolute Simplicity (求于至简,归于永恒: seek the utmost simplicity, return to the eternal)
22
Content
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Implementation issues
Segmentation
33
Memory Management
Ideally programmers want memory that is:
– large
– fast
– non-volatile
44
Memory Hierarchy
Small amount of fast, expensive memory – cache
Some medium-speed, medium-price memory – main memory
Gigabytes of slow, cheap memory – disk storage
55
Virtual Memory
An illusion provided to the applications
An address space can be larger than the amount of physical memory on the machine
– This is called virtual memory
Memory manager handles the memory hierarchy
66
Basic Memory Management
Mono-programming – without swapping or paging
Multi-programming with fixed partitions – swapping may be used
77
Uni-programming
One process runs at a time– One process occupies memory at a time
Always load process into the same memory spot
And reserve some space for the OS
88
Uni-programming
Three simple ways of organizing memory for an operating system with one user process:
[Figure: (a) OS in RAM at the bottom of memory, user program above; (b) OS in ROM at the top of memory, user program below; (c) device drivers in ROM at the top, user program in the middle, OS in RAM at the bottom; addresses run from 0 to 0xFFF…]
99
Uni-programming
Achieves address independence by loading the process into the same physical memory location
Problems with uni-programming?
– Must load the process in its entirety (what if there is not enough space?)
– Wastes resources (both CPU and memory)
1010
Multi-programming
More than one process is in memory at a time
Need to support address translation– Address from instruction may not be the final address
Need to support protection– Each process cannot access other processes’ space
1111
Multiprogramming with Fixed Partitions
Two options exist for fixed memory partitions
Separate input queues for each partition– Incoming processes are allocated into fixed partition
Single input queue– Incoming processes can go to any partition
– Assume partition is big enough to hold the processes
1212
Multiprogramming with Fixed Partitions
1313
Benefit of Multiprogramming
Improve resource utilization – all resources can be kept busy with proper care
Improve response time – no need to wait for the previous process to finish to receive feedback from the computer
1414
CPU Utilization of Multiprogramming
Assume processes spend 80% of their time waiting for I/O
CPU utilization for uni-programming: 20%
CPU utilization for two processes: 36% – rough estimate only (i.e. ignores overhead)
– CPU utilization = 1 − 0.8 × 0.8 = 0.36
CPU utilization for three processes: 48.8%
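These figures follow from a simple probabilistic model: with n independent processes, each waiting for I/O a fraction p of the time, the CPU is idle only when all n wait at once, so utilization is 1 − p^n. A minimal sketch (the function name is illustrative):

```python
def cpu_utilization(n, io_wait=0.8):
    """Fraction of time the CPU is busy with n processes, each waiting
    for I/O a fraction io_wait of the time (ignores all overhead)."""
    return 1 - io_wait ** n

# 1 process: ~20%, 2 processes: ~36%, 3 processes: ~48.8%
for n in (1, 2, 3):
    print(n, round(cpu_utilization(n), 3))
```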
1515
CPU Utilization of Multiprogramming
[Figure: CPU utilization as a function of the degree of multiprogramming]
1616
Multiprogramming System Performance
Given: – Arrival and work requirements of 4 jobs
– CPU utilization for 1 – 4 jobs with 80% I/O wait
Plot:– Sequence of events as jobs arrive and finish
– Show amount of CPU time jobs get in each interval
1717
Multiprogramming System Performance
1818
Multiprogramming Issues
Cannot be sure where a program will be loaded
– Address locations of variables and code routines cannot be absolute
Solution:
– Use base and limit values
– Each address is added to the base value to map to a physical address
– Address locations larger than the limit value are an error
1919
Multiprogramming Issues
Protecting processes from each other – must keep a program out of other processes’ partitions
Solution: address translation
– Must translate addresses issued by a process so they don’t conflict with addresses issued by other processes
2020
Address Translation
Static address translation– Translate addresses before execution
– Translation remains constant during execution
Dynamic address translation– Translate addresses during execution
– Translation may change during execution
2121
Address Translation
Is it possible to run two processes at the same time (both in memory) and provide address independence with only static address translation?
Does this achieve the other address space abstractions?
– No (i.e. it does not offer address protection)
2222
Address Translation
Achieving all the address space abstractions (protection and independence) requires doing some work on every memory reference
Solution:– Dynamic address translation
2323
Dynamic Address Translation
Translate every memory reference from virtual address to physical address
Virtual address:– An address viewed by the user process
– The abstraction provided by the OS
Physical address– An address viewed by the physical memory
2424
Dynamic Address Translation
[Figure: the user process issues a virtual address; the translator (MMU) converts it to a physical address, which is sent to physical memory]
2525
Benefit of Dynamic Translation
Enforces protection – one process can’t even refer to another process’s address space
Enables virtual memory – a virtual address only needs to be in physical memory when it’s being accessed
– Change translations on the fly as different virtual addresses occupy physical memory
2626
Dynamic Address Translation
Does dynamic address translation require hardware support?
– Hardware support is better to have, but not absolutely necessary
2727
Implement Translator
Lots of ways to implement the translator
Tradeoffs among:– Flexibility (e.g. sharing, growth, virtual memory)
– Size of translation data
– Speed of translation
2828
Base and Bounds
The simplest solution
Load each process into contiguous regions of physical memory
Prevent each process from accessing data outside its region
2929
Base and Bounds
if (virtual address >= bound) {
    trap to kernel; kill process (core dump)
} else {
    physical address = virtual address + base
}
3030
Base and Bounds
Process has the illusion of running on its own dedicated machine with memory [0, bound)
[Figure: virtual memory [0, bound) maps to the physical memory region [base, base + bound)]
3131
Base and Bounds
This is similar to a linker-loader – but it also protects processes from each other
Only kernel can change base and bounds
During context switch, must change all translation data (base and bounds registers)
3232
Base and Bounds
What to do when address space grows?
3333
Pros of Base and Bounds
Low hardware cost– 2 registers, adder, comparator
Low overhead– Add and compare on each memory reference
3434
Cons of Base and Bounds
Hard for a single address space to be larger than physical memory
But sum of all address spaces can be larger than physical memory
– Swap an entire address space out to disk
– Swap address space for new process in
3535
Cons of Base and Bounds
Can’t share part of an address space between processes
[Figure: processes 1 and 2 each have code and data in their virtual address spaces; ideally physical memory would hold one shared copy of the code plus data (P1) and data (P2)]
Does this work under base and bounds?
3636
Cons of Base and Bounds
Solution: use 2 sets of base and bounds
– one for the code section
– one for the data section
3737
Swapping
Memory allocation changes as:
– Processes come into memory
– Processes leave memory
Moving processes out of memory (to disk) and bringing them back in is called “swapping”
3838
Swapping
3939
Swapping
Problem with the previous situation?
Difficult to grow process space – i.e. stack, data, etc.
Solution:
– Allocate extra space for a growing data segment
– Allocate extra space for growing stack & data segments
4040
Swapping
4141
Memory Mgmt Issues
Need to keep track of memory used and available
Bit map approach
Linked list approach
4242
Memory Mgmt with Bit Maps
Divide memory into allocation units
Use one bit to mark if the unit is allocated or not
4343
Memory Mgmt with Linked Lists
Divide memory into allocation units
Use a linked list to record allocated and available units in chunks
4444
Memory Mgmt with Bit Maps
(a) Memory allocation (b) Bit-map representation (c) Linked-list representation
4545
Memory Mgmt with Linked Lists
What happens when units are released?
May need to coalesce adjacent free entries in the linked list
4646
Memory Mgmt with Linked Lists
4747
External Fragmentation
Processes come and go
Can leave a mishmash of available mem regions
Some regions may be too small to be of any use
4848
External Fragmentation
P1 starts: 100 KB (phys. mem. 0-99 KB)
P2 starts: 200 KB (phys. mem. 100-299 KB)
P3 starts: 300 KB (phys. mem. 300-599 KB)
P4 starts: 400 KB (phys. mem. 600-999 KB)
P3 exits (frees phys. mem. 300-599 KB)
P5 starts: 100 KB (phys. mem. 300-399 KB)
P1 exits (frees phys. mem. 0-99 KB)
P6 starts: 300 KB
4949
External Fragmentation
300 KB are free (400-599 KB; 0-99 KB)– but not contiguous
This is called “external fragmentation”– wasted memory between allocated regions
Can waste lots of memory
5050
Strategies to Minimize Fragmentation
Best fit:
– Allocate the smallest memory region that can satisfy the request (least amount of wasted space)
First fit:
– Allocate the first memory region found that can satisfy the request
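The two strategies can be sketched over a free list of (start, size) holes; the names `find_hole` and `holes` below are illustrative, not from the slides:

```python
def find_hole(holes, request, strategy="first"):
    """Return the index of the chosen free hole, or None if none fits."""
    fits = [(i, size) for i, (_, size) in enumerate(holes) if size >= request]
    if not fits:
        return None
    if strategy == "first":
        return fits[0][0]                     # first fit: first hole that fits
    return min(fits, key=lambda c: c[1])[0]   # best fit: smallest hole that fits

holes = [(0, 300), (400, 100)]          # free regions as (start KB, size KB)
print(find_hole(holes, 100, "first"))   # 0: the 300 KB hole is found first
print(find_hole(holes, 100, "best"))    # 1: the 100 KB hole wastes nothing
```

Note the trade-off visible even here: best fit scans the whole list but leaves the large hole intact.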
5151
Strategies to Minimize Fragmentation
In the worst case, must re-allocate existing memory regions by copying them to another area (memory compaction)
5252
Problems of Fragmentation
Hard to grow address space – might have to move to a different region of physical memory (which is slow)
How to extend more than one contiguous data structure in virtual memory?
5353
Paging
Allocate physical memory in terms of fixed-size chunks of memory (called pages)
– fixed unit makes it easier to allocate
– any free physical page can store any virtual page
Virtual address– virtual page # (high bits of address, e.g. bits 31-12)
– offset (low bits of addr, e.g. bits 11-0 for 4 KB page)
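With a 4 KB page, the split into virtual page number and offset is just a shift and a mask:

```python
OFFSET_BITS = 12                 # 4 KB page -> 12 offset bits
PAGE_MASK = (1 << OFFSET_BITS) - 1

def split_address(vaddr):
    """Split a virtual address into (virtual page #, page offset)."""
    return vaddr >> OFFSET_BITS, vaddr & PAGE_MASK

vpn, off = split_address(0x12345678)
print(hex(vpn), hex(off))   # 0x12345 0x678
```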
5454
Paging
Processes access memory by virtual addresses
Each virtual memory reference is translated into physical memory reference by the MMU
5555
Paging
5656
Paging
Page translation process:
if (virtual page is invalid or non-resident or protected) {
    trap to OS fault handler
} else {
    physical page # = pageTable[virtual page #].physPageNum
}
5757
Paging
What must be changed on a context switch?– Page table, registers, cache image
– Possibly memory images
Each virtual page can be in physical memory or paged out to disk
5858
Paging
How does the processor know that a virtual page is not in physical memory?
– Through a valid/invalid bit in the page table
Pages can have different protections – e.g. read, write, execute
– This information is also kept in the page table
6161
Page Table
Used to keep track of the virtual-to-physical page mapping
One entry for each virtual page
Also keeps other relevant information, such as read/write/execute protection and valid bits
The MMU uses it to perform address translation
6262
Page Table
Typical page table entry
6363
Paging
The relation between virtual addresses and physical memory addresses is given by the page table
6464
Paging
The internal operation of the MMU with sixteen 4 KB pages
6565
Paging Pros and Cons
+ simple memory allocation
+ can share lots of small pieces of an addr space
+ easy to grow the address space– Simply add a virtual page to the page table
– and find a free physical page to hold the virtual page before accessing it
6666
Paging Pros and Cons
Problems with paging?
The size of the page table could be enormous
Take a 32-bit virtual address for example
Assume the page size is 4 KB
Then there are 2^20 (over one million) virtual pages
For a 64-bit virtual address?
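The arithmetic behind this: a 32-bit space with 4 KB pages has 2^32 / 2^12 = 2^20 virtual pages, and a flat table for a 64-bit space is hopeless. A quick check:

```python
def num_virtual_pages(addr_bits, page_size=4096):
    """Number of virtual pages in an addr_bits-wide address space."""
    return 2 ** addr_bits // page_size

print(num_virtual_pages(32))   # 1048576 (2**20)
print(num_virtual_pages(64))   # 4503599627370496 (2**52): far too many
```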
6767
Paging Pros and Cons
The solution?
Use multi-level translation!
Break page tables into 2 or more levels
The top-level page table always resides in memory
Second-level page tables are brought into memory as needed
6868
Multi-level Translation
Standard page table is a simple array– one degree of indirection
Multi-level translation changes this into a tree – multiple degrees of indirection
6969
Multi-level Translation
Example: two-level page table
Index into the level 1 page table using virtual address bits 31-22
Index into the level 2 page table using virtual address bits 21-12
Page offset: bits 11-0 (4 KB page)
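The 10/10/12 split above can be sketched with shifts and masks (the function name is illustrative):

```python
def split_two_level(vaddr):
    """Split a 32-bit virtual address for a 10/10/12 two-level scheme."""
    l1 = (vaddr >> 22) & 0x3FF   # bits 31-22: level 1 index
    l2 = (vaddr >> 12) & 0x3FF   # bits 21-12: level 2 index
    off = vaddr & 0xFFF          # bits 11-0: page offset
    return l1, l2, off

print(split_two_level(0x00403004))   # (1, 3, 4)
```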
7070
Multi-level Translation
[Figure: two-level page table translation]
7171
Multi-level Translation
What info is stored in the level 1 page table?– Information concerning secondary-level page tables
What info is stored in the level 2 page table?– Virtual-to-physical page mappings
7272
Multi-level Translation
This is a two-level tree
Level 1 page table: entries 0-3; valid entries point to level 2 page tables, NULL entries have none
Level 2 page table (virtual address bits 21-12 → physical page #):
0 → 10
1 → 15
2 → 20
3 → 2
7373
Multi-level Translation
How does this allow the translation data to take less space?
How to use share memory when using multi-level page tables?
What must be changed on a context switch?
7575
Multi-level Translation
Pros and cons
+ space-efficient for sparse address spaces
+ easy memory allocation
+ lots of ways to share memory
- two extra lookups per memory reference
7676
Inverted Page Table
An alternate solution to the big-table-size problem
Rather than storing a virtual-to-physical mapping, we store a physical-to-virtual mapping – one entry per physical page frame
This significantly reduces the page table size
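Since the table is indexed by frame rather than virtual page, lookups typically go through a hash on (process, virtual page). A minimal sketch, with a dict standing in for that hash table (names are illustrative):

```python
# (pid, virtual page) -> physical frame number
ipt = {}

def map_page(pid, vpn, frame):
    ipt[(pid, vpn)] = frame

def lookup(pid, vpn):
    """Return the physical frame, or None (page fault) if unmapped."""
    return ipt.get((pid, vpn))

map_page(7, 0x12345, 42)
print(lookup(7, 0x12345))   # 42
print(lookup(7, 0x99999))   # None
```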
7777
Inverted Page Tables
7878
Comparing Basic Translation Schemes
Base and bound: – unit (and swapping) is an entire address space
Segments: unit (and swapping) is a segment– a few large, variable-sized segments per addr space
Page: unit (and swapping/paging) is a page– lots of small, fixed-sized pages per address space
– How to modify paging to take less space?
7979
Translation Speed
Translation when using paging involves 1 or more additional memory references
– This can be a big issue if not taken care of
How to speed up the translation process?
Solution: – Translation look-aside buffer
8080
Translation Look-aside Buffer
Facility to speed up memory access
Abbreviated as TLB
TLB caches translation from virtual page # to physical page #
TLB conceptually caches the entire page table entry, e.g. dirty bit, reference bit, protection
8181
Translation Look-aside Buffer
If the TLB contains the entry you’re looking for – can skip all the translation steps above
On a TLB miss, figure out the translation by getting the user’s page table entry, storing it in the TLB, then restarting the instruction
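The hit/miss flow above can be sketched with a dict acting as a software-managed TLB in front of the page table (the data values are illustrative):

```python
page_table = {5: 10, 6: 15}   # vpn -> physical page (illustrative data)
tlb = {}

def translate(vpn):
    if vpn in tlb:                 # TLB hit: skip the page-table walk
        return tlb[vpn]
    if vpn not in page_table:      # would trap to the OS fault handler
        raise KeyError("page fault")
    tlb[vpn] = page_table[vpn]     # TLB miss: fill the TLB, then retry
    return tlb[vpn]

print(translate(5))   # 10 (miss, fills TLB)
print(translate(5))   # 10 (hit)
```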
8282
A TLB to Speed Up Paging
8383
Translation Look-aside Buffer
Does this change what happens on a context switch?
8484
Replacement
One design dimension in virtual memory is– which page to replace when you need a free page?
Goal is to reduce the number of page faults– i.e. a page to be accessed is not in memory
8585
Replacement
Modified page must first be saved– unmodified just overwritten
Better not to choose an often used page– will probably need to be brought back in soon
8686
Replacement Algorithms
Random replacement
Optimal replacement
NRU (not recently used) replacement
FIFO (first in first out) replacement
Second chance replacement
8787
Replacement Algorithms
LRU (least recently used) replacement
Clock replacement
Working set replacement
Working set clock (WSClock) replacement
8888
Random Replacement
Randomly pick a page to replace
Easy to implement, but poor results
8989
Optimal Replacement
Replace page needed at farthest point in future– i.e. page that won’t be used for the longest time
– this yields the minimum number of misses
– but requires knowledge of the future
Forecasting the future is difficult, if possible at all
9090
NRU Replacement
Replace page not recently used
Each page has Reference bit, Modified bit– bits are set when page is referenced, modified
9191
NRU Replacement
Pages are classified into four classes:
– Class 0: not referenced, not modified
– Class 1: not referenced, modified
– Class 2: referenced, not modified
– Class 3: referenced, modified
NRU removes a page at random from the lowest-numbered non-empty class
9292
FIFO Replacement
Replace the page that was brought into memory the longest time ago
Maintain a linked list of all pages – in order they came into memory
Page at beginning of list replaced
9393
FIFO Replacement
Unfortunately, this can replace popular pages that were brought into memory a long time ago (and used frequently since then)
9494
Second Chance Algorithm
A modification to FIFO
Just as FIFO, but a page is evicted only if its R bit is 0
If the R bit is 1, the page is moved to the end of the list and its R bit is cleared
i.e. the page brought in longest ago, but with its R bit set, is given a second chance
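The eviction step can be sketched with a deque holding [page, R-bit] pairs in FIFO order:

```python
from collections import deque

def second_chance(pages):
    """pages: deque of [name, r_bit] in FIFO order; return evicted name."""
    while True:
        name, r = pages.popleft()
        if r == 0:
            return name           # oldest page with R=0 is evicted
        pages.append([name, 0])   # R=1: clear R, move to end of list

q = deque([["A", 1], ["B", 0], ["C", 1]])
print(second_chance(q))   # B: A had its R bit set, so it got a second chance
```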
9595
Second Chance Algorithm
Page list if fault occurs at time 20, A has R bit set (numbers above pages are loading times)
9696
LRU Replacement
LRU stands for Least Recently Used
Use past references to predict the future– temporal locality
If a page hasn’t been used for a long time– it probably won’t be used again for a long time
9797
LRU Replacement
LRU is an approximation to OPT
Can we approximate LRU to make it easier to implement without increasing miss rate too much?
Basic idea is to replace an old page– not necessarily the oldest page
9898
LRU Replacement
Must keep a linked list of pages– most recently used at front, least at rear
– update this list every memory reference !!
Alternatively use counter in each page table entry– choose page with lowest value counter
– periodically zero the counter
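The linked-list variant can be sketched with an ordered map: touch a page and it moves to the back; the front is always the least recently used (class and method names are illustrative):

```python
from collections import OrderedDict

class LRU:
    """Exact LRU: front of the ordered dict is least recently used."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = OrderedDict()

    def reference(self, page):
        """Touch a page; return the evicted page, or None."""
        if page in self.frames:
            self.frames.move_to_end(page)   # now most recently used
            return None
        victim = None
        if len(self.frames) >= self.nframes:
            victim, _ = self.frames.popitem(last=False)  # evict LRU
        self.frames[page] = True
        return victim

lru = LRU(3)
for p in (0, 1, 2, 0):
    lru.reference(p)
print(lru.reference(3))   # 1: page 0 was just touched, so 1 is the LRU
```

The "update on every reference" cost is exactly why hardware approximations (matrix, aging, clock) exist.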
9999
Implementing LRU with Matrix
Another option is to use n×n matrix– Here n is the number of pages in virtual space
The matrix is set to zero initially
Whenever a page k is referenced:– Row k is set to all ones, then column k is set to all zeros
Whenever need to pick a page to evict– Pick the one with the smallest number (row value)
100100
Implementing LRU with Matrix
Pages referenced in order 0,1,2,3,2,1,0,3,2,3
101101
Implementing LRU with Aging
Each page corresponds to a shift register – initially set to zero
At every clock tick, each register is shifted one bit right, and the page’s R bit is added to the leftmost bit of its register
Whenever we need to pick a page to evict – pick the one with the smallest number
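One tick of aging can be sketched as below; note the counters shift right with the R bit entering at the left, the standard formulation, so recently referenced pages end up with the largest counters (dict-based names are illustrative):

```python
def tick(counters, r_bits, width=8):
    """One clock tick of the aging algorithm: shift each counter right
    and insert the page's R bit at the leftmost position, then clear R."""
    for page in counters:
        counters[page] = (counters[page] >> 1) | (r_bits[page] << (width - 1))
        r_bits[page] = 0

counters = {"A": 0, "B": 0}
r_bits = {"A": 1, "B": 0}
tick(counters, r_bits)
victim = min(counters, key=counters.get)   # smallest counter -> evict
print(victim)   # B: only A was referenced, so B looks oldest
```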
102102
Implementing LRU with Aging
103103
The Clock Algorithm
Maintain “referenced” bit for each resident page– set automatically when the page is referenced
Reference bit can be cleared by OS
The resident pages are organized into a circular list (a clock)
A clock hand points to one of the pages
104104
The Clock Algorithm
To find a page to evict – look at the page pointed to by the clock hand
reference=0 means the page hasn’t been accessed in a long time (since the last sweep)
reference=1 means the page has been accessed since your last sweep. What to do?
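The answer, sketched below: the hand clears R bits as it sweeps and evicts the first page it finds with R=0 (function name is illustrative):

```python
def clock_evict(pages, r_bits, hand):
    """Return (index of evicted page, new hand position); the hand clears
    R bits as it sweeps and stops at the first page with R=0."""
    while True:
        if r_bits[hand] == 0:
            return hand, (hand + 1) % len(pages)
        r_bits[hand] = 0                  # referenced: clear and move on
        hand = (hand + 1) % len(pages)

r = [1, 1, 0, 1]
victim, hand = clock_evict(["A", "B", "C", "D"], r, 0)
print(victim)   # 2: C is the first page the hand finds with R=0
```

Even if every page starts with R=1, the first sweep clears all the bits, so the second sweep must find a victim – the loop cannot run forever.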
105105
The Clock Algorithm
106106
The Clock Algorithm
Can this infinite loop?
What if it finds all pages referenced since the last sweep?
New pages are put behind the clock hand, with reference=1
108108
The Working Set Algorithm
The working set is the set of pages used by the k most recent memory references
w(k,t) is the size of the working set at time t
109109
The Working Set Algorithm
110110
The Working Set Algorithm
The working set changes as time passes, but its size stabilizes as k (the number of most recent references considered) grows
111111
The Work Set Clock Algorithm
Combines the working set algorithm with the clock algorithm
Pages are organized into a clock cycle
Each page has a time of last use and an R bit
Whenever we need to evict a page:
– Inspect pages starting from the one pointed to by the clock hand
– The first page with R bit 0 that is outside the working set is evicted
112112
113113
Page Replacement Algorithm Review
114114
Design Issues for Paging Systems
Thrashing
Local versus Global Allocation Policies
Page size
OS Involvement with Paging
Page fault handling
115115
Thrashing
What happens when a working set is bigger than the available memory frames?
Thrashing!
i.e. constant page faults bringing pages in and out
Should avoid thrashing at all costs
116116
Local versus Global Allocation
When evicting a page, do we only consider pages of the same process for eviction?
– Local allocation policy
Or do we look at the whole memory for a victim?
– Global allocation policy
117117
Local versus Global Allocation
[Figure: local policy vs. global policy]
118118
Local versus Global Allocation
In a global allocation policy, can use PFF to manage the allocation
– PFF: page fault frequency
If PFF is large, allocate more memory frames
Otherwise, decrease the number of frames
Goal is to maintain an acceptable PFF
119119
Local versus Global Allocation
[Figure: page fault rate as a function of the number of page frames assigned]
120120
Page Size
What happens if page size is small?
What happens if page size is really big?
Could we use a large page size but let other processes use the leftover space in the page?
Page size is typically a compromise– e.g. 4 KB or 8 KB
121121
Page Size
What happens to paging if the virtual address space is sparse?
– most of the address space is invalid,
– with scattered valid regions
122122
Small Page Size
Advantages– less internal fragmentation
– better fit for various data structures, code sections
– less unused program in memory
Disadvantages– programs need many pages, larger page tables
123123
Page Size
Therefore, to decide a good page size, one needs to balance page table size and internal fragmentation
124124
Page Size
Overhead due to the page table and internal fragmentation can be calculated as:

overhead = s·e/p + p/2

where s = average process size in bytes, p = page size in bytes, e = bytes per page table entry
The first term is the page table space; the second is the average internal fragmentation.
125125
Page Size
Overhead is minimized when: p = √(2se)
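Plugging in representative numbers (s = 1 MB, e = 8 bytes per entry, chosen for illustration) shows the optimum landing at a familiar page size:

```python
from math import sqrt

def overhead(p, s, e):
    """Page-table space s*e/p plus average internal fragmentation p/2."""
    return s * e / p + p / 2

s, e = 1 << 20, 8          # 1 MB average process, 8-byte page table entries
p_opt = sqrt(2 * s * e)    # sqrt(2se) = 4096: a 4 KB page is optimal here
print(p_opt, overhead(p_opt, s, e))   # 4096.0 4096.0
```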
126126
Fixed vs. Variable Size Partitions
Fixed size (pages) must be compromise– too small a size leads to a large translation table
– too large a size leads to internal fragmentation
127127
Fixed vs. Variable Size Partitions
Variable size (segments) can adapt to the need
– but it’s hard to pack these variable-size partitions into physical memory, leading to external fragmentation
128128
Load Control
Despite good designs, system may still thrash
When PFF algorithm indicates: – some processes need more memory
– but no processes need less
129129
Load Control
Solution: reduce the number of processes competing for memory
– swap one or more to disk, divide up the pages they held
– reconsider the degree of multiprogramming
130130
Separate Instruction and Data Spaces
With a combined instruction and data space, programmers have to fit everything into one space
By separating instruction and data spaces, we:
– Allow programmers more freedom
– Facilitate sharing of program text (code)
131131
Separate Instruction and Data Spaces
132132
OS Involvement with Paging
Four times when the OS is involved with paging
133133
OS Involvement with Paging
Process creation– determine program size
– create page table
Process execution– MMU reset for new process
– TLB flushed
134134
OS Involvement with Paging
Page fault time– determine the virtual address that causes the fault
– swap target page out, needed page in
Process termination time– release page table, pages
135135
Page Fault Handling
Hardware traps to kernel
General registers saved
OS determines which virtual page needed
OS checks validity of address, seeks page frame
If selected frame is dirty, write it to disk
136136
Page Fault Handling
OS schedules the new page to be brought in from disk
Page tables updated
Faulting instruction backed up to the state it had when it began
Faulting process scheduled
Registers restored
Program continues
138138
Locking Pages in Memory
Sometimes may need to lock a page in memory– i.e. prohibit its eviction from memory
139139
Locking Pages in Memory
A process issues a call to read from a device into a buffer
– while waiting for the I/O, another process starts up
– it has a page fault
– the buffer of the first process may be chosen to be paged out
Need to allow specified pages to be locked – exempted from being target pages
140140
Backing Store
When paged out, where does it go on disk?
Two options:
A special designated area: swap area
Or the program’s normal place on disk (its file-system image)
141141
Backing Store
When using the swap area, there are two options:
Allocate swap area for entire process– Do this at loading time before execution
Allocate swap area for part of the process that is currently paged out to disk
– Load process into memory first, then as pages get paged out, allocate swap area then
142142
Backing Store
(a) Paging to a static area (b) Back up pages dynamically
148148
Page Out
What to do with page when it’s evicted?– i.e. do we write it back to disk or simply discard?
Why not write pages to disk on every store?– It costs CPU time to do this
149149
Page Out
While evicted page is being written to disk, the page being brought into memory must wait
May be able to reduce total work by giving preference to dirty pages
– e.g. could evict clean pages before dirty pages
If system is idle, might spend time profitably by writing back dirty pages
150150
Page Table Contents
Data stored in the hardware page table:
Resident bit: – true if the virtual page is in physical memory
Physical page # (if in physical memory)
Dirty bit: – set by MMU when page is written
151151
Page Table Contents
Reference bit: – set by MMU when page is read or written
Protection bits (readable, writable)– set by operating system to control access to page
– Checked by hardware on each access
152152
Page Table Contents
Does the hardware page table need to store the disk block # for non-resident virtual pages?
Do we really need hardware to maintain a “dirty” bit?
How to reduce # of faults required to do this?
Do we really need hardware to maintain a “reference” bit?
153153
MMU
Memory management unit
MMU is responsible for checking:– if the page is resident
– if the page protections allow this access
– and setting the dirty/reference bits
154154
MMU
If page is resident and access is allowed, – MMU translates virtual address into physical address
– using info from the TLB and page table
Then MMU issues the physical memory address to the memory controller
155155
MMU
If page is not resident, or protection bits disallow the access– the MMU generates an exception (page fault)
156156
Segmentation
In a paging system, each process occupies one virtual address space
This may be inconvenient since different sections of the process can grow or shrink independently
157157
Segmentation
One-dimensional address space with growing tables
One table may bump into another
158158
Segmentation
The solution is segmentation!
159159
Segmentation
Segment: a region of contiguous memory space
Segmentation divides both physical and virtual memory into segments
Each segment is dedicated to one or more sections of a process
Pure segmentation keeps each segment in memory in its entirety (no paging within segments)
160160
Segmentation
Allows each table to grow or shrink, independently
161161
Segmentation
Let’s generalize this to allow multiple segments, described by a table of base & bound pairs:

Segment # | Base | Bound | Description
0 | 4000 | 700 | Code segment
1 | 0 | 500 | Data segment
2 | – | – | Unused
3 | 2000 | 1000 | Stack segment
162162
Segmentation
[Figure: virtual segment 0 (code, addresses 0-6ff), segment 1 (data, 0-4ff), and segment 3 (stack, 0-fff) map into physical memory at 4000-46ff, 0-4ff, and 2000-2fff respectively]
163163
Segmentation
Note that not all virtual addresses are valid– e.g. no valid data in segment 2;
– no valid data in segment 1 above 4ff
Valid means the region is part of the process’s virtual address space
164164
Segmentation
Invalid means this virtual address is illegal for the process to access
Accesses to invalid address will cause the OS to take corrective measures
– usually a core dump
165165
Segmentation
Protection: – different segments can have different protection
– e.g. code can be read-only (allows inst. fetch, load)
– e.g. data is read/write (allows fetch, load, store)
In contrast, base & bounds gives same protection to entire address space
166166
Segmentation
In segmentation, a virtual address takes the form:– (virtual segment #, offset)
Could specify virtual segment # via – The high bits of the address,
– Or a special register,
– Or implicit to the instruction opcode
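Translation under this scheme is a table lookup plus a bounds check, sketched below with the base/bound table from the earlier example read as hex (function and variable names are illustrative):

```python
# Segment table from the example: segment # -> (base, bound); 2 is unused.
seg_table = {0: (0x4000, 0x700), 1: (0x0, 0x500), 3: (0x2000, 0x1000)}

def translate(segment, offset):
    """Map (virtual segment #, offset) to a physical address."""
    if segment not in seg_table or offset >= seg_table[segment][1]:
        raise MemoryError("invalid virtual address")   # would trap to the OS
    base, _ = seg_table[segment]
    return base + offset

print(hex(translate(0, 0x10)))   # 0x4010: inside the code segment
```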
167167
Implementation of Pure Segmentation
168168
Segmentation
What must be changed on a context switch?
169169
Pros and Cons of Segmentation
+ works well for sparse address spaces– with big gaps of invalid areas
+ easy to share whole segments without sharing entire address space
- complex memory allocation
170170
Compare Paging and Segmentation
Consideration | Paging | Segmentation
Need the programmer be aware that this technique is being used? | No | Yes
How many linear address spaces are there? | 1 | Many
Can the total address space exceed the size of physical memory? | Yes | Yes
Can procedures and data be distinguished & separately protected? | No | Yes
Can tables whose sizes fluctuate be accommodated easily? | No | Yes
Is sharing of procedures between users facilitated? | No | Yes
Why was this technique invented? | To get a large linear address space without buying more physical memory | To allow programs and data to be broken up into logically independent address spaces and to aid sharing and protection
171171
Segmentation
Can a single address space be larger than physical memory?
How to make memory allocation easy and – allow an address space be larger than physical memory?
172172
Segmentation with Paging
173173
Segmentation with Paging
The descriptor segment points to page tables
Page tables point to physical frames
MULTICS used this method
174174
SP Example: MULTICS
A 34-bit MULTICS virtual address:
Segment number (18 bits) | Page number (6 bits) | Offset within the page (10 bits)
175175
SP Example: MULTICS
[Figure: the segment number indexes the descriptor segment to find a descriptor; the descriptor locates the segment’s page table; the page number indexes that page table to find the page frame; the offset selects the word within the page]
176176
SP Example: MULTICS TLB
178178
SP Example: Pentium
Pentium virtual memory contains two tables:
Global Descriptor Table:– Describes system segments, including OS
Local Descriptor Table:– Describes segments local to each program
179179
SP Example: Pentium
A Pentium selector contains a bit indicating whether the segment is local or global:
Index: LDT or GDT entry number (13 bits) | 0 = GDT / 1 = LDT (1 bit) | Privilege level 0-3 (2 bits)
180180
SP Example: Pentium
Pentium code segment descriptor (Data segments differ slightly)
181181
SP Example: Pentium
Conversion of a (selector, offset) pair to a linear address
182182
Pentium Address Mapping
183183
Protection on the Pentium
[Figure: protection levels (rings) on the Pentium]
Computer Changes Life