MTE 241 Final Exam Review

7 Scheduling

7.1 Overview
- schedule use of shared resources
- process/thread that needs CPU makes request to CPU scheduler
- scheduler chooses a ready thread of execution to use the CPU when it is free, moves running thread to ready/blocked state
- scheduling policy determines when thread is removed from CPU and which ready thread gets CPU
- running thread stops using CPU for four reasons:
  o thread completes execution, leaves system
  o thread requests resource that cannot be allocated, state changed to blocked and enqueued to waiting list
  o thread voluntarily releases CPU, returns to ready state
  o thread involuntarily releases CPU b/c of pre-emption

7.2 Scheduling Mechanisms
- depends on features in hardware – most important is clock device
- three logical parts of every scheduler: enqueuer, dispatcher, context switcher
  o enqueuer: when process is changed to ready state, enqueuer places pointer to process into ready process queue
    ▪ may compute priority when process is inserted into ready list or when considered for removal from ready list
  o context switcher: saves contents of all CPU registers (PC, IR, condition status, processor status, ALU status) of thread being removed from CPU when processes are switched
  o dispatcher: invoked after application process removed from CPU
    ▪ dispatcher's context must be loaded to CPU in order for it to run
    ▪ selects one of ready threads from ready list, allocates CPU to thread by performing context switch btwn itself and selected thread

Context Switch Timing
- context switching can significantly affect performance
- each context switch requires (n+m)*b*K time units
  o n = # of general registers in processor
  o m = # of status registers in processor
  o b = # of store operations required to save a single register
  o K = # of time units required for each store instruction
- two pairs of context switches (4 in total) occur when processes are multiplexed (switched)
  o original process context saved -> dispatcher context loaded -> dispatcher context saved -> ready process loaded
- "50ns to store 1 unit of information" = b*K = 50ns
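- worked example (numbers are illustrative, not from the course): with n = 32 general registers, m = 8 status registers, b = 1 store per register, and K = 50 ns per store, each context save or load costs (32 + 8) * 1 * 50 ns = 2,000 ns, so the four save/load operations in the multiplexing sequence above cost about 4 * 2,000 ns = 8,000 ns per process switch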

Voluntary CPU Sharing
- yield machine instruction allows process to release CPU
- yield saves address of next instruction to be executed in designated memory location (not on process's stack), then branches to arbitrary address
-
    yield(r, s) {
        memory[r] = PC;    /* save where this process should resume */
        PC = memory[s];    /* branch to the other process's saved resume point */
    }
- two processes use yield(*,r) & yield(*,s) to rotate between who has CPU
- for more than two processes, yield(*,scheduler) is used so scheduler selects next process
- scheduler w/ voluntary CPU sharing = non-pre-emptive scheduler
- problem with yield: if a process does not periodically yield, all other processes are blocked
  o problem for a process that is in an infinite loop without requesting resources
  o solved with involuntary sharing
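A minimal runnable sketch of this voluntary-sharing pattern in user space, assuming the POSIX ucontext(3) API as a stand-in for the yield instruction (task_a, task_b, and the stack sizes are made-up illustrative names/values):

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t ctx_main, ctx_a, ctx_b;
    static char stack_a[16384], stack_b[16384];

    static void task_a(void) {
        for (int i = 0; i < 3; i++) {
            printf("A has the CPU\n");
            swapcontext(&ctx_a, &ctx_b);   /* yield(a, b): save resume point, branch to B */
        }
    }

    static void task_b(void) {
        for (int i = 0; i < 3; i++) {
            printf("B has the CPU\n");
            swapcontext(&ctx_b, &ctx_a);   /* yield(b, a) */
        }
    }

    int main(void) {
        getcontext(&ctx_a);
        ctx_a.uc_stack.ss_sp = stack_a;
        ctx_a.uc_stack.ss_size = sizeof stack_a;
        ctx_a.uc_link = &ctx_main;         /* resume main when task_a returns */
        makecontext(&ctx_a, task_a, 0);

        getcontext(&ctx_b);
        ctx_b.uc_stack.ss_sp = stack_b;
        ctx_b.uc_stack.ss_size = sizeof stack_b;
        ctx_b.uc_link = &ctx_main;
        makecontext(&ctx_b, task_b, 0);

        swapcontext(&ctx_main, &ctx_a);    /* hand the CPU to A; A and B alternate */
        return 0;
    }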

Involuntary CPU Sharing
- interrupt system forces periodic interruption of any process's execution
- interrupt generated whenever interval timer expires
- scheduler w/ involuntary CPU sharing = pre-emptive scheduler

Performance
- related to how long process must wait when it becomes ready
  o determined by scheduling policy & context switching time
- starvation: process ignored by dispatcher, never receives CPU time

7.3 Strategy Selection
- depends on goal of OS: process priorities, fairness, overall resource utilization, avg/max turnaround time, avg/max response time, maximize system availability, deadlines
- internal priority – determines rank order dispatcher uses to select executing process
  o can be determined by dynamic circumstances: time thread has been waiting, closeness of its deadline
- equality goal: over any K time units, each of the n ready processes gets K/n time units w/ CPU
  o as wait time ↑, priority ↑
  o when using CPU, priority ↓
- for involuntary CPU sharing, there is a known time quantum (maximum)/timeslice length
  o time quantum = amt of time btwn interval timer interrupts
  o timeslice may be less if the process blocks on a resource; next process must get full timeslice, so scheduler must reset interrupt timer
- optimal schedule
  o for pre-emptive schedulers only
  o can be computed if no new processes enter ready list while those in ready list are served
  o scheduler may use more time computing optimal schedule than actually servicing threads

Scheduling Model
- P = {pi | 0 <= i < n}
- P = set of processes, pi = each process, n = # of processes
- service time τ(pi) = amount of time a thread needs to be running before it is completed
- wait time W(pi) = time thread spends waiting in ready state before first transition to running state
- turnaround time TTRnd(pi) = time btwn first transition to ready state & exit time
- turnaround time is most critical performance metric

7.4 Nonpreemptive Strategies
- allow any thread to run to completion once it has CPU
- ignoring context switching time, ρ = λ/μ
  o ρ = fraction of time CPU is busy
  o λ = mean arrival rate of new processes into ready list
  o μ = mean service rate (e.g., # of threads serviced per minute)
  o if λ > μ, ρ > 1: CPU is saturated and ready list will overflow
  o if λ < μ, ρ < 1: system reaches steady state
  o systems with ρ -> 1 need large ready lists
- FCFS (first-come-first-served)
  o ready list organized as FIFO data structure
  o easy to implement; ignores service time requests & other performance-influencing criteria
  o generally does not perform well

- Shortest Job Next (SJN) (see the FCFS/SJN comparison sketch after this list)
  o chooses thread requiring min. service time as highest priority job
  o minimizes avg wait time
  o penalizes threads with high service time requests – may cause starvation
  o very high CPU utilization, multiple processes on ready list at all times

- Priority Scheduling
  o threads allocated to CPU on basis of externally assigned priority
  o may cause low-priority threads to starve – can be solved w/ dynamic priorities
- Deadline Scheduling
  o thread has recurring service time & deadline
  o performance measure based on ability to meet deadlines, not turnaround & wait time
  o process only admitted to ready list if scheduler can guarantee the specified service time before each deadline can still be met for all admitted processes
  o one type: earliest deadline first scheduling
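A small sketch comparing average turnaround time under FCFS and SJN, assuming all jobs arrive at t = 0 (the five service times are illustrative values):

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    /* turnaround of job i = its finish time, since every job arrives at t = 0 */
    static double avg_turnaround(const int tau[], int n) {
        int t = 0;
        double total = 0.0;
        for (int i = 0; i < n; i++) {
            t += tau[i];          /* job i finishes at time t */
            total += t;
        }
        return total / n;
    }

    int main(void) {
        int tau[] = {350, 125, 475, 250, 75};   /* illustrative service times */
        int n = 5;

        printf("FCFS avg turnaround: %.1f\n", avg_turnaround(tau, n));
        qsort(tau, n, sizeof tau[0], cmp);      /* SJN = run shortest job first */
        printf("SJN  avg turnaround: %.1f\n", avg_turnaround(tau, n));
        return 0;
    }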

7.5 Preemptive Strategies
- highest-priority thread allocated CPU
- if a higher-priority thread becomes ready, it can interrupt currently executing thread
- pre-emptive versions of non-preemptive strategies exist, but always keep highest-priority job running
- tend to have more context switches & more overhead than non-preemptive systems
- Round Robin (RR) (see the simulation sketch after this list)
  o goal: equitable distribution of processing time among all threads requesting CPU
  o when a thread finishes or blocks before its quantum expires, the next thread gets a new full time quantum instead of the remainder of the old one
  o new process on ready queue is serviced after all processes already on queue – avg of n/2 time slices before it gets CPU
  o after each interrupt, resource manager is called and releases resources to threads on blocked list
- Multiple-level queue
  o ready list organized as a set of priority queues
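A sketch of an RR schedule with time quantum q = 50, ignoring context-switch cost; the service times reuse the illustrative values from the FCFS/SJN sketch:

    #include <stdio.h>

    int main(void) {
        int rem[] = {350, 125, 475, 250, 75};  /* remaining service times */
        int n = 5, q = 50, t = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (rem[i] == 0) continue;
                int slice = rem[i] < q ? rem[i] : q;  /* may use less than a full quantum */
                t += slice;
                rem[i] -= slice;
                if (rem[i] == 0) {
                    printf("job %d turnaround = %d\n", i, t);
                    done++;
                }
            }
        }
        return 0;
    }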

8 Basic Synchronization Principles

8.1 Cooperating Processes
- synchronization is act of ensuring independent threads begin executing blocks of code at same logical time
- for useful concurrency, threads must be able to share information, yet not interfere with each other during critical sections of execution
- critical sections occur in software when threads access common shared variables
- without synchronization, concurrent execution of threads not guaranteed to be determinate
- race condition – outcome of computation depends on relative times that processes execute their critical sections
- enableInterrupt()/disableInterrupt()
  o used after & before critical sections; can cause problems if there is infinite loop inside critical section
  o user processes cannot invoke these functions
- lock variable – shared memory that is set & unset by concurrent programs

Deadlock
- 2 or more threads get into a state where each is controlling a resource the other needs

Resource Sharing
- shared variables are shared resources with mutually exclusive access
- when one thread has resource, others cannot access it

8.2 Evolving from the Classic Solution
- fork(), join(), quit() can be used to synchronize concurrent computation
- join() is used to synchronize two processes
- process creation/destruction is quite costly
- three basic approaches to synchronization:
  o use only user-mode software algorithms and shared variables
  o disable/enable interrupts around critical sections
  o incorporate specialized mechanisms in hardware/OS

8.3 Semaphores
- requirements for critical section problem:
  o only one process at a time allowed to be executing in its critical section (mutual exclusion)
  o if critical section is free & more than one process wants to enter, process to enter is chosen by the collection of processes, not an external agent
  o waiting process cannot be blocked for indefinite period of time
  o once process attempts to enter critical section, cannot be forced to wait for more than a bounded # of other processes to enter first
- semaphore, s, is a nonnegative integer variable
- V(s): [s = s + 1]
- P(s): [while (s == 0) {wait}; s = s - 1]
- V(s) & P(s) are atomic operations
- if s = 0, process executing P can be interrupted while it is waiting in the while loop
- usually semaphore initialized to s = 1
- interrupts disabled only while the semaphore itself is being manipulated
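A minimal sketch of P/V guarding a critical section, assuming POSIX semaphores (sem_wait plays the role of P, sem_post the role of V):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t s;
    static long shared = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&s);      /* P(s): block while s == 0, then decrement */
            shared++;          /* critical section */
            sem_post(&s);      /* V(s): increment, possibly waking a waiter */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&s, 0, 1);    /* binary semaphore, initialized to 1 */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %ld\n", shared);  /* 200000 given mutual exclusion */
        sem_destroy(&s);
        return 0;
    }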

Bounded Buffer Problem
- occurs regularly in concurrent software
- one process is producer, other is consumer of information
- producer obtains empty buffer from empty buffer pool, fills it with info, places it in full buffer pool; consumer picks up buffer from full buffer pool, then places buffer into empty buffer pool for recycling
- illustrates counting semaphore – has values from 0 to N
- P(empty) blocks producer if there are no empty buffers
- P(full) blocks consumer if there are no full buffers
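A sketch of this scheme with POSIX counting semaphores (N, the int items, and the function names are illustrative; empty starts at N, full at 0, and a mutex semaphore guards the buffer indices):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8
    static int buffer[N];
    static int in = 0, out = 0;
    static sem_t empty, full, mutex;

    static void produce(int item) {
        sem_wait(&empty);            /* P(empty): block if no empty buffers */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);             /* V(full): one more full buffer */
    }

    static int consume(void) {
        sem_wait(&full);             /* P(full): block if no full buffers */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);            /* V(empty): recycle the buffer */
        return item;
    }

    static void *producer(void *arg) { (void)arg; for (int i = 0; i < 20; i++) produce(i); return NULL; }
    static void *consumer(void *arg) { (void)arg; for (int i = 0; i < 20; i++) printf("%d\n", consume()); return NULL; }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, N);      /* N empty buffers to start */
        sem_init(&full, 0, 0);       /* no full buffers yet */
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }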

Readers-Writers Problem
- reader process can share resource with any other reader process, but not with writers
- writer process needs exclusive access to resource when it acquires any access to resource
- as long as a reader holds resource & new readers keep arriving, writer must wait for resource to be completely available
- race condition between the first reader & a writer
- only first reader executes P(writeBlock); last reader performs the V operation
- stream of readers can block out writer
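A sketch of exactly this first-readers-writers scheme using POSIX semaphores (mutex protects readCount; only the first reader takes writeBlock and only the last reader releases it):

    #include <semaphore.h>

    static sem_t mutex;        /* protects readCount; initialize to 1 */
    static sem_t writeBlock;   /* excludes writers; initialize to 1 */
    static int readCount = 0;

    void reader(void) {
        sem_wait(&mutex);
        if (++readCount == 1)
            sem_wait(&writeBlock);   /* first reader locks out writers */
        sem_post(&mutex);

        /* ... read the shared resource ... */

        sem_wait(&mutex);
        if (--readCount == 0)
            sem_post(&writeBlock);   /* last reader readmits writers */
        sem_post(&mutex);
    }

    void writer(void) {
        sem_wait(&writeBlock);       /* exclusive access to the resource */
        /* ... write the shared resource ... */
        sem_post(&writeBlock);
    }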

Test-and-set
- accomplishes effects of P & V in modern hardware
- TS is a single (atomic) machine instruction
- "TS R3, m" loads the value at memory location m into register R3, sets the condition code from that value, and writes TRUE back to m
- shortcoming: can only implement binary semaphores directly
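A sketch of a TS-style binary lock, assuming the GCC/Clang __atomic_test_and_set builtin as a software stand-in for the TS instruction (the spin loop is exactly the busy-wait described next):

    #include <stdbool.h>

    static volatile bool m = false;   /* false = free, true = held */

    void lock(void) {
        /* like "TS R3, m": atomically read m and write TRUE to it;
         * keep spinning while the old value was already TRUE */
        while (__atomic_test_and_set((void *)&m, __ATOMIC_ACQUIRE))
            ;                          /* busy-wait */
    }

    void unlock(void) {
        __atomic_clear((void *)&m, __ATOMIC_RELEASE);   /* m = FALSE */
    }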

Busy-wait
- code repeatedly executes loop testing variable until it switches – wastes CPU time
- can use yield(*,scheduler) inside while loop to stop wasting timeslice
- active version of the wait uses yield; passive version spins inside the while loop, wasting CPU time

9 High-level Synchronization and Interprocess Communication

9.1 Alternative Synchronization Primitives

AND / Simultaneous P Synchronization
- used when 2+ shared variables must be accessed by a process
- P_sim(semaphore1, semaphore2, …)
- trying to acquire one semaphore before the other may cause deadlock
- P_sim just calls P(semaphore1); P(semaphore2); in a fixed order

Events
- abstraction of semaphore operations
- represented by system data structure: event descriptor/event control block
- three member functions: wait(), signal(), queue()
  o wait() – blocks calling thread until another thread performs signal()
  o signal() – resumes one waiting thread suspended by wait()
  o queue() – returns # of processes currently waiting
  o if no one is waiting, nothing happens on signal()

9.2 Monitors
- abstract data type – only one thread can execute any of its member functions at one time
- member function execution treated like a critical section
- waiting process should temporarily relinquish monitor to prevent deadlock
- condition variable is global to all procedures within monitor, can be manipulated by:
  o wait()
  o signal(): condition must be rechecked b/c situation may have changed
  o queue()
- no context switch occurs until signaling thread voluntarily vacates monitor
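A sketch of these semantics using pthreads, assuming a mutex as the monitor lock and a condition variable (monitor_take/monitor_put and the counter are made-up illustrations):

    #include <pthread.h>

    static pthread_mutex_t monitor = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonEmpty = PTHREAD_COND_INITIALIZER;
    static int count = 0;

    void monitor_take(void) {
        pthread_mutex_lock(&monitor);       /* enter monitor: one thread at a time */
        while (count == 0)                  /* recheck after every wakeup */
            pthread_cond_wait(&nonEmpty, &monitor);  /* wait(): relinquishes monitor */
        count--;
        pthread_mutex_unlock(&monitor);     /* leave monitor */
    }

    void monitor_put(void) {
        pthread_mutex_lock(&monitor);
        count++;
        pthread_cond_signal(&nonEmpty);     /* signal(): wake one waiter */
        pthread_mutex_unlock(&monitor);     /* waiter runs only after signaler vacates */
    }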

9.3 Interprocess Communication
- OS assists threads in different processes in sharing information
- OS copies info from sending process's address space into receiving process's address space

The Pipe Model
- pipe: FIFO buffer implemented in kernel
- has read end & write end, each treated as a file reference
- thread that knows file reference for write end can call write() to put data into pipe
- thread that knows file reference for read end can call read() to remove data from pipe
- limitation of pipes is providing a process w/ the file reference
- only processes that can use a pipe are parent & child processes created after pipe creation (anonymous pipes)
- named pipes: process obtains pipe end by using string analogous to file name
  o allows processes to exchange info using "public pipes"
  o must be managed b/c potentially accessible by any process

- pipes provide a simple one-way communication model
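A sketch of an anonymous pipe between parent and child, assuming the POSIX pipe()/fork() calls (the child can use the pipe only because it was created after the pipe):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                  /* fd[0] = read end, fd[1] = write end */
        char buf[32];

        pipe(fd);
        if (fork() == 0) {          /* child: reader */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            buf[n] = '\0';
            printf("child got: %s\n", buf);
            _exit(0);
        }
        close(fd[0]);               /* parent: writer */
        write(fd[1], "hello", 5);
        close(fd[1]);
        return 0;
    }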

Message Passing Mechanisms
- sender process composes message as block of formatted info
- OS copies message from sender's address space to receiver's address space
- sender must request OS to deliver the message, since copying between separate address spaces can only be done in supervisor mode
- OS transmits message in a few copy operations:
  o retrieves message from sender's address space
  o puts message into OS buffer
  o copies message from buffer to receiver's address space

Mailboxes
- need mailbox b/c receiver may not know it has a message
- OS stores incoming messages in mailbox buffer before copying them into receiver's address space
- receiver must explicitly ask for message w/ receive operation
- receive call can be library routine instead of OS function, but then some parts of mailbox may be overwritten by accident
- mailboxes usually implemented in operating system, requiring OS to allocate memory space for each application's mailbox

Message Protocols
- protocol between sender and receiver – both agree on message format
- header for message identifies various pieces of info relating to message:
  o sending process's identification
  o receiving process's identification
  o # of bytes of info being transmitted in body of message
  o message type
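A sketch of such a header as a C struct (the field names and types are illustrative assumptions, not a prescribed format):

    #include <stddef.h>
    #include <sys/types.h>

    struct msg_header {
        pid_t  sender;       /* sending process's identification */
        pid_t  receiver;     /* receiving process's identification */
        size_t body_len;     /* # of bytes of info in the message body */
        int    type;         /* message type */
    };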

Send Operation
- asynchronous send()
  o delivers message to receiver's mailbox
  o sender continues operation without waiting for receiver to read message
  o sender does not even know if receiver retrieves message
- synchronous send()
  o incorporates synchronization w/ info transfer
  o blocks sending process until msg has been successfully received
  o weak form: sender resumes execution after message has been delivered
  o strong form: sender blocked until message is retrieved
  o weak form does not provide synchronization, but provides reliable message transmission

Receive Operation
- blocking receive()
  o if no message in mailbox, process suspends operation until message placed in mailbox
  o when mailbox empty, receiver's operation synchronized with sender process
  o analogous to resource request
- nonblocking receive()
  o queries mailbox
  o returns control to calling process immediately w/ or without message

Deferred Message Copying
- copying can be major performance bottleneck
- copy-on-write optimization
  o reduces number of times message is copied
  o rather than copying info to & from address spaces, OS constructs pointer in mailbox area to buffered info
  o OS copies pointer rather than whole message into receiver's address space
  o only when a write occurs is info copied to a private part of the receiver's address space
  o each then has its own copy, so the effect of a write is not perceived by the other

10 Deadlock

Prevention
- for deadlock to occur, the following four conditions must hold at the same time:
- Mutual exclusion:
  o threads in process have exclusive use of resource once it has been allocated to them
- Hold and wait:
  o process holds resource at same time it requests another one
- Circular waiting:
  o P1 has R1, needs R2; P2 has R2, needs R1
  o may have more than 2 processes in circular wait
- No pre-emption:
  o resources can only be released by explicit action in process, not action of external authority
  o process cannot withdraw its request

Avoidance
- relies on resource manager's ability to predict effect of satisfying individual allocation requests
- avoidance strategies refuse a request if it can lead to deadlock

Detection and Recovery
- some systems have no avoidance; instead, system checks for deadlock periodically or when system seems slow
- detection phase: system checked to see if deadlock currently exists
- recovery phase: resources are pre-empted from processes
  o nonpreemption condition is violated and a selected process is destroyed

Manual Deadlock Management
- when deadlock occurs, up to user/operator of system to detect it
- recovery means something dramatic, like rebooting computer

10.2 System Deadlock Model
- any process might cause state transition depending on whether it
  o requests a resource
  o is allocated a resource
  o deallocates a resource
- blocked process cannot change state of system
- if any process is deadlocked in system state sk, sk is called a deadlock state

10.3 Prevention
- prevention strategies make sure at least one of the four conditions (mutual exclusion, hold and wait, circular wait, no pre-emption) is always false
- mutual exclusion must be true all the time, so prevention focuses on the other three conditions
- Hold and Wait:
  o 1: can require process to request all of its resources when it is created, not one at a time
  o 2: can require process to release all currently held resources before requesting new ones
  o first approach can cause poor utilization of resources – resources more difficult to obtain, jobs may starve
  o state-transition model: process requests all resources it needs in one transition
- Circular Wait (see the lock-ordering sketch at the end of this section):
  o establish a total order on all resources in the system & only allow a process to acquire a resource with index number greater than the indexes of all other resources currently held by the process
  o must include consumable and reusable resources
  o if process requests a resource with lower index than a resource it currently holds, it must release all higher-index resources, acquire the lower-index resource, and reacquire the higher-index resources – increases time process has to wait for resources

- Allowing Preemption:
  o OS allows process to "back out of" a resource request if resource not available
  o requesting process repeatedly polls resource manager until resource is available
  o process pre-empts its own request, is never blocked on requests, and returns to previous state
  o no guarantee technique is effective
  o may cause livelock: processes cause transitions that are not effective in the long term
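A sketch of circular-wait prevention via total ordering, assuming pthreads mutexes stand in for resources (every thread takes locks in ascending index order, so no cycle of waits can form):

    #include <pthread.h>

    static pthread_mutex_t R[3] = {
        PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER
    };

    /* Every thread that needs R[0] and R[2] takes the lower index first.
     * A thread holding R[2] that later needed R[0] would have to release
     * R[2], acquire R[0], then retake R[2] (the reacquire rule above). */
    void *thread_body(void *arg) {
        (void)arg;
        pthread_mutex_lock(&R[0]);    /* lowest index first */
        pthread_mutex_lock(&R[2]);    /* higher index second */
        /* ... use both resources ... */
        pthread_mutex_unlock(&R[2]);
        pthread_mutex_unlock(&R[0]);
        return NULL;
    }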

10.4 Avoidance
- analyze prospective state before entering it to guarantee any sequence of transitions will not cause deadlock
- process must declare maximum claim when created – # of units it will ever request of each resource type
- system is always kept in a safe state
- as long as processes tend to use less than max claim, system likely to be in safe state
- safe state: even if all processes were to require their max claim, there is still an order in which the requests of all processes are eventually satisfied
- eventually all requests will be serviced, though not simultaneously
- unsafe does not mean deadlock, but that the matter is "out of the hands" of the resource manager
- as long as state is safe, resource manager can avoid deadlock

The Banker’s Algorithm

- avail[j] = cj – Σ(i=0..n-1) alloc[i,j]
- avail[j] = # of available units of resource Rj
- cj = # of units of resource Rj in the system
- alloc[i,j] = # of units of resource Rj currently allocated to process pi

- key question: if a process suddenly requested all of its max claim resources, would there be enough resources to satisfy the request? If this holds for every process in some order -> safe state
  o modeled by: if all units of resource held by a process are returned to the avail vector, can the other processes then exercise their max claims?

- iteratively determines if every process can have its max claim met:
  o 1: copy alloc[i,j] into table named alloc'
  o 2: given C, maxc, and alloc', compute avail vector: avail[j] = cj – Σi alloc'[i,j]
  o 3: find a pi such that maxc[i,j] – alloc'[i,j] <= avail[j] for 0<=j<m
    ▪ if no such pi exists, state is unsafe, algorithm halts
    ▪ if alloc'[i,j] = 0 for all i and j, state is safe, algorithm halts
  o 4: set alloc'[i,j] = 0 for all j to indicate pi can exercise its max claim and then deallocate all its resources, representing that pi is not permanently blocked in the analyzed state
    ▪ go back to Step 2
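The steps above, sketched in C (NPROC, NRES, and is_safe are illustrative names; the function returns whether the analyzed state is safe):

    #include <stdbool.h>
    #include <string.h>

    #define NPROC 4
    #define NRES  3

    bool is_safe(const int c[NRES],                  /* total units per resource */
                 const int maxc[NPROC][NRES],        /* max claims */
                 const int alloc[NPROC][NRES]) {     /* current allocation */
        int allocp[NPROC][NRES];                     /* alloc' working copy */
        int avail[NRES];
        bool finished[NPROC] = {false};

        memcpy(allocp, alloc, sizeof allocp);        /* step 1 */

        for (int done = 0; done < NPROC; ) {
            for (int j = 0; j < NRES; j++) {         /* step 2: avail = c - sum(alloc') */
                avail[j] = c[j];
                for (int i = 0; i < NPROC; i++) avail[j] -= allocp[i][j];
            }
            int pi = -1;                             /* step 3: find a satisfiable pi */
            for (int i = 0; i < NPROC && pi < 0; i++) {
                if (finished[i]) continue;
                bool ok = true;
                for (int j = 0; j < NRES; j++)
                    if (maxc[i][j] - allocp[i][j] > avail[j]) ok = false;
                if (ok) pi = i;
            }
            if (pi < 0) return false;                /* no such pi: unsafe */
            for (int j = 0; j < NRES; j++)
                allocp[pi][j] = 0;                   /* step 4: pi finishes, releases all */
            finished[pi] = true;
            done++;                                  /* back to step 2 */
        }
        return true;                                 /* every process can finish: safe */
    }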

10.5 Detection and Recovery
- resource manager far more aggressive than in avoidance
- ignores distinction between safe and unsafe states
- detection algorithms don't make predictions about future states that can be reached
- algorithms determine if any sequence of transitions will allow every process to become unblocked

Serially Reusable Resources
- finite, constant number of identical units
- each unit can only be allocated to a single process at any time
- unit can only be released if it was previously acquired
- analyzed with resource graphs (drawn with squares and circles)

Consumable Resources
- a process that acquires a consumable resource never releases it
- process can release units of consumable resource without ever acquiring them
- e.g. signal, message, input data
- number of units of consumable resource is not constant
- one or more producer processes may increase # of units by releasing units of resource
- consumer processes decrease # of units by acquiring them

Recovery
- once deadlock detected, system must change to state w/ no deadlocked processes
- done by pre-empting processes & releasing their resources so that other deadlocked processes are unblocked
- operator may destroy processes until system operates again
- checkpoint/rollback mechanism – process periodically takes snapshot of its current state
  o OS saves checkpoint, process continues activity
  o if process involved in deadlock, OS destroys process & re-establishes it from the checkpointed state
- after process destroyed, deadlock detection algorithm runs again to see if recovery was successful

11 Memory Management
- primary memory (a.k.a. executable memory) holds info while being used by CPU
  o referenced one byte at a time
  o relatively fast access time
  o volatile
- secondary memory = collection of storage devices
  o referenced as blocks of bytes
  o slow access time
  o persistent (opposite of volatile)
- programs & info kept in primary memory only when being used by CPU, then restored to secondary memory
- memory manager = resource manager for primary memory
  o allocates blocks of primary memory to processes
  o automatically transfers info between primary & secondary memory using virtual memory

11.1 The Basics

Storage Hierarchies
- von Neumann architecture = storage hierarchy
- von Neumann employs at least three levels of memory:
  o highest: CPU register memory
  o middle: primary executable memory
  o lowest: secondary memory
- CPU can access primary memory with single load/store instruction – takes a few clock cycles
- secondary memory access involves action by driver & physical device – 3 orders of magnitude more time
- modern computers have more levels:
  o CPU registers
  o cache memory (primary)
  o RAM memory (primary)
  o rotating magnetic memory (secondary)
  o optical memory (secondary)
  o sequentially accessed memory (secondary)

Memory Manager
- exploits storage hierarchies
- when CPU updates a record: copies fields from primary memory to CPU registers, modifies info in registers, copies back to primary memory; record later written back to secondary memory
  o once lower-level copy saved, higher-level copy destroyed
- modern memory manager automatically moves info, no need to explicitly read/write files
- classic memory manager:
  o Abstraction: primary memory abstracted into a large array of contiguously addressed bytes
    ▪ abstract set of addresses used to reference physical primary memory locations
  o Allocation: process can request exclusive use of memory block
  o Isolation: process is assured exclusive use of contiguously addressed block of bytes it is allocated
  o Sharing: isolation mechanism bypassed so 2+ processes can share block of memory

11.2 The Address Space Abstraction
- memory manager assigns each process a set of logical primary addresses used to read/write locations of physical primary memory
- logical primary memory of process = address space

Managing the Address Space
- compile time: program is translated to produce relocatable object module
- link time: collection of relocatable modules combined using linkage editor, absolute module produced
- organization of absolute module defines process's address space
- system loader places absolute program into block of primary memory allocated to process by memory manager
- system loader binds logical addresses to physical addresses

Compile Time
- relocatable object module has three logical blocks of addresses:
  o code segment: block of machine instructions
  o data segment: block of static variables
  o stack segment: stack used when program is executed
- static variables – compiler references them using relative addresses within data segment
- automatic variables – referenced relative to bottom of stack

Link Time
- link editor combines all data segments into one & all code segments into one
- editor relocates addresses so the updated addresses are referenced
- absolute module stored in a file in secondary memory until process executes it

Load Time
- address binding: loader translates each internal logical primary memory address into the physical primary memory address it refers to

11.3 Memory Allocation
- allocates memory using space-multiplexed sharing
- fragmentation: small memory fragments that cannot be used b/c memory manager is unable to allocate them in an efficient manner

Static Memory Allocation
- memory manager allocates every single byte of memory a process needs when it requests memory
- memory becomes fragmented

Fixed-Partition
- primary memory statically divided into fixed partitions of different sizes
- process's address space required to be <= size of allocated partition
- internal fragmentation: some space is allocated to process but not mapped into its address space
- best fit: process allocated region with space that best fits its needs
- worst fit: process allocated region with space that worst fits its needs
- first fit: process allocated first available region it finds that is large enough – saves time used to traverse free list
- next fit: process allocated next suitable region after currently occupied one – free list converted into circular list

Variable-Partition
- regions dynamically defined according to instantaneous space needs of processes
- removes possibility of internal fragmentation
- memory manager keeps track of memory block sizes to allocate them efficiently
- small amounts of memory at end of memory space lost to external fragmentation
- if external fragmentation occurs in middle of memory space, memory blocks are moved to create single block of unallocated memory at end of memory space

Contemporary Allocation
- memory usually allocated in fixed-size blocks (pages)
- all allocatable units are same size
- each time program grows/shrinks, loader must rebind each address in program to new primary memory location

11.4 Dynamic Address Space Binding
- static address binding: relative address in relocatable module -> address in absolute module -> primary memory address
- every address bound to primary memory prior to runtime
- dynamic relocation: enables memory manager to move program around in memory without need for adjusting addresses
  o uses relative addresses (offsets) so load module addresses do not need to be changed
- relocation register: loaded w/ first address of the primary memory block assigned to the address space
  o changed each time a different process is allocated the CPU
- OS given complete freedom to choose the locations where executable images are loaded into primary memory
- multiple relocation registers: relative address + appropriate register (code, stack, or data) -> primary memory address
  o code segment, stack segment and data segment can each be relocated independently by the CPU

Runtime Bound Checking
- each relocation register has a limit register
  o loaded with length of memory segment addressed by relocation register
  o if address being sent to primary memory is less than value of limit register, address refers to location within memory segment
  o else address refers to part of primary memory not allocated to process currently using CPU (segment violation) – causes interrupt
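A sketch of this translation path (the struct and function names are made up for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    struct segment_reg {
        unsigned relocation;   /* first physical address of the allocated block */
        unsigned limit;        /* length of the memory segment */
    };

    unsigned translate(struct segment_reg r, unsigned rel_addr) {
        if (rel_addr >= r.limit) {                   /* runtime bound check */
            fprintf(stderr, "segment violation\n");  /* would raise an interrupt */
            exit(1);
        }
        return r.relocation + rel_addr;              /* offset into allocated block */
    }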

11.5 Modern Memory Manager Strategies

Swapping
- used in systems w/ only one thread per process
- attempts to optimize system performance by removing process from primary memory when its thread is blocked, deallocating memory, allocating memory to other processes
- memory must be reacquired & reloaded when thread is ready again
- executable image can simply be copied to secondary memory and back to newly allocated sections b/c of relocation register
- good for timesharing systems
- key observation: if process not going to use CPU for relatively long time, it should release allocated primary memory

Virtual Memory
- allows process to use CPU when only part of its address space is loaded in primary memory
- process's address space partitioned into parts that can be loaded into primary memory when needed
- natural implicit partitions: e.g. code, data, stack segments
- spatial locality: set of addresses used during a phase of computation, changes when phase changes

Shared-Memory Multiprocessors
- several processors share an interconnection network to access a set of shared-memory modules
- goal: use classic processes/threads to implement computation where info is shared via common primary memory locations
- e.g. first block of primary memory for process 1 = last block of primary memory for process 2
- address space split into private & shared parts using multiple relocation-limit registers

12 Virtual Memory
- memory manager copies portions of process's address space into primary memory when process is referencing that info, then info is updated in secondary memory and removed from primary memory
- programs use the virtual address space the same way they would use primary memory

12.2 Address Translation
- distinguish between symbolic name, virtual address, & physical address spaces

Address Space Mapping
- source program components represented w/ symbolic identifiers – elements of program's name space
- each symbolic name in name space translated into virtual address during link time
- each virtual address converted to physical primary memory address during load time
- mapping order: name space (@ source program) -> process's virtual address space (@ absolute program) -> physical address space (@ executable image)
- when thread references part of virtual address space not currently loaded in primary memory, execution is suspended & missing information is loaded from secondary memory – missing information interruption
- missing information interruption:
  o 1. virtual memory manager interrupts process execution
  o 2. referenced info retrieved from secondary memory, loaded into primary memory location k
  o 3. manager updates address translation that was previously missing
  o 4. manager enables program to continue execution
- size(virtual address space) > size(physical address space)

Segmentation
- extension of relocation-limit register use for bounds checking
- parts loaded are defined as variable-sized segments
- virtual address space divided into set of memory segments
- virtual address is ordered pair <segmentNumber, offset>, where offset defines byte within segment

Paging
- single-component addresses
- entire virtual address space is one linear sequence of virtual addresses (not hierarchical)
- single block of virtual addresses divided into collection of equal-sized pages
- unit of memory that is moved between primary & secondary memory is a page
- page boundaries transparent to programmer
- memory manager operates without prior knowledge regarding page relationships
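A sketch of how a single-component virtual address splits into <page, offset>, assuming an illustrative 4 KiB page size:

    #include <stdint.h>

    #define PAGE_BITS 12
    #define PAGE_SIZE (1u << PAGE_BITS)       /* 4096 bytes per page */

    /* high-order bits select the page, low-order bits the byte within it */
    uint32_t page_number(uint32_t vaddr) { return vaddr >> PAGE_BITS; }
    uint32_t page_offset(uint32_t vaddr) { return vaddr & (PAGE_SIZE - 1); }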

Segmentation vs. Paging
- segmentation: provides programmer explicit control over units of transfer
- segmentation: requires more effort unless segments automatically generated
- segments: can be more efficient since programmer can specify set of virtual address locations to be used at same time
- segments: memory system has a harder time placing variable-sized segments, causing external fragmentation
- segmentation better but harder to use & implement

12.3 Paging
- fixed-size unit of virtual address space transferred when needed to execute program
- each page has same number of locations
- only small amount of internal fragmentation
- program translation bound to virtual address space
- program only needs to use a subset of all its pages at a given time
- paging system goal: identify set of pages needed for process's current locality, load only those pages into page frames in primary memory

*must complete chapter 12 blargh!!!!*

13 File Management

13.2 Files
- most application programs read info from files, process the data, and write results into one or more files
  o stdin – file abstraction of input device
  o stdout – file abstraction of normal output device
  o stderr – file abstraction of error log
- file manager provides abstraction & protection mechanism
- file manager provides manual mechanism for storing/retrieving info to/from storage devices
- virtual memory paging & files are different abstractions of secondary memory
- filenames of all files accessible from any address space; virtual memory contents only available to process associated with them
- stream-block translation (marshalling and unmarshalling): abstraction links blocks of storage system together to form logical collection of information
- low-level file system: OS provides only stream-block translation
  o data structure is flattened into byte stream when written to device
  o when data retrieved, read block-by-block and unmarshalled into app-level data structure
- structured/high-level file system: record-stream translation provided

Low-Level Files
- byte-stream file: named sequence of bytes indexed by non-negative integers
- process that opens file uses file position to reference byte in file
- reading/writing k bytes advances file position by k bytes
- open(filename):
  o filename: character string that uniquely identifies file
  o prepares file for reading/writing
  o causes file descriptor to reflect file being put into use
  o modes can be set, such as "open for reading, not writing"
- close(fileID):
  o deallocates internal descriptors created by open()
- read(fileID, buffer, length):
  o copies block of length bytes from file fileID into buffer
  o increments file position by number of bytes read, returns that number
  o end-of-file condition returned if at end of file when read() called
- write(fileID, buffer, length):
  o writes length bytes of information from buffer to current file position
  o increments file position by length
- seek(fileID, filePosition):
  o changes value of file position to filePosition
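These operations map directly onto POSIX calls; a minimal sketch ("notes.txt" is an arbitrary example file):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        char buf[16];
        int fd = open("notes.txt", O_RDWR);   /* prepares file, returns descriptor */
        if (fd < 0) return 1;
        read(fd, buf, sizeof buf);            /* advances file position by bytes read */
        lseek(fd, 0, SEEK_SET);               /* seek: reset file position to byte 0 */
        write(fd, "X", 1);                    /* writes at current file position */
        close(fd);                            /* deallocates internal descriptor */
        return 0;
    }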

Structured Files
- structured sequential file: named sequence of logical records
  o indexed by nonnegative integers
  o access to file defined by file position
  o records are indexed instead of bytes
- open(filename)
- close(fileID)
- getRecord(fileID, record): returns record addressed by file position
- putRecord(fileID, record): writes designated record at current position
- seek(fileID, position): moves file position to point at designated record
- k bytes allocated to contain each record
- large records must be fragmented when stored
- small data records waste space

13.3 Low-level File Implementations
- file manager implements stream-block translation
- mapping of logical to physical blocks is not normally to contiguous blocks
- file descriptor: created by file manager when file is created
  o stores detailed info about file
  o kept on storage device w/ contents of file
  o information kept: external name, sharable flag, owner, protection settings, length, time of creation, time of last modification, time of last access, reference count, storage device details