System Software and Operating Systems: basic notes

1. Memory Management with Linked Lists

(Figure: the four neighbor combinations for the terminating process X.)

Algorithms for allocating memory when linked-list management is used (a first-fit sketch appears below, after the operating-system overview):
1. FIRST FIT - allocates the first hole found that is large enough; fast (as little searching as possible).
2. NEXT FIT - almost the same as First Fit, except that it keeps track of where it last allocated space and starts searching from there instead of from the beginning; gives slightly better performance.
3. BEST FIT - searches the entire list for the hole closest in size to what the process needs; slow, and it does not improve resource utilization because it tends to leave many very small (and therefore useless) holes.
4. WORST FIT - the opposite of Best Fit; chooses the largest available hole and breaks off a piece that is still large enough to be useful (i.e. to hold another process); in practice it has not been shown to work better than the others.

2. What is an Operating System?

An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs. The operating system is the software that makes a computer actually work and that enables all the programs we use. The OS organizes and controls the hardware and acts as an interface between the application programs and the machine hardware. Examples: Windows, Linux, Unix, Mac OS, etc.
The OS is a resource allocator: it manages all resources and decides between conflicting requests for efficient and fair resource use. The OS is also a control program: it controls the execution of programs to prevent errors and improper use of the computer.
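Returning to item 1, First Fit scans the hole list and takes the first hole that is big enough. A minimal sketch under assumed types (a singly linked list of free holes with a base address and a size; the names are illustrative, not from any particular kernel):

#include <stdio.h>
#include <stddef.h>

/* One free hole in the linked list of holes (illustrative types). */
struct hole {
    size_t base, size;
    struct hole *next;
};

/* First Fit: take the first hole that is large enough, carving the
 * allocation off its front; returns (size_t)-1 if nothing fits. */
size_t first_fit(struct hole *list, size_t request)
{
    for (struct hole *h = list; h != NULL; h = h->next) {
        if (h->size >= request) {
            size_t base = h->base;
            h->base += request;
            h->size -= request;      /* zero-size holes can be unlinked later */
            return base;
        }
    }
    return (size_t)-1;
}

int main(void)
{
    struct hole h2 = { 900, 300, NULL };   /* hole at 900, 300 bytes */
    struct hole h1 = { 100,  50, &h2 };    /* hole at 100, 50 bytes  */
    printf("allocated at %zu\n", first_fit(&h1, 200));   /* prints 900 */
    return 0;
}

Next Fit would keep a pointer to where the last search stopped and resume from there; Best Fit and Worst Fit would scan the whole list, remembering the smallest or largest hole that fits.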

Operating System Services - Operating systems provide an environment for the execution of programs, and services to programs and users.

One set of operating-system services provides functions that are helpful to the user:
1. User interface - Almost all operating systems have a user interface (UI). It varies between Command-Line Interface (CLI), Graphical User Interface (GUI), and Batch.
2. Program execution - The system must be able to load a program into memory and run that program, and to end its execution either normally or abnormally (indicating an error).
3. I/O operations - A running program may require I/O, which may involve a file or an I/O device.
4. File-system manipulation - The file system is of particular interest. Programs need to read and write files and directories, create and delete them, search them, list file information, and manage permissions.
5. Communications - Processes may exchange information, on the same computer or between computers over a network. Communication may be via shared memory or through message passing (packets moved by the OS).
6. Error detection - The OS needs to be constantly aware of possible errors, which may occur in the CPU and memory hardware, in I/O devices, or in a user program. For each type of error, the OS should take the appropriate action to ensure correct and consistent computing. Debugging facilities can greatly enhance the user's and programmer's ability to use the system efficiently.

Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing:
7. Resource allocation - When multiple users or multiple jobs run concurrently, resources must be allocated to each of them. There are many types of resources: some (such as CPU cycles, main memory, and file storage) may have special allocation code, while others (such as I/O devices) may have general request and release code.
8. Accounting - To keep track of which users use how much and what kinds of computer resources.
9. Protection and security - The owners of information stored in a multiuser or networked computer system may want to control use of that information, and concurrent processes should not interfere with each other. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders requires user authentication and extends to defending external I/O devices from invalid access attempts.

Detail:
1) User-Operating-System Interface: CLI - A Command-Line Interface (CLI), or command interpreter, allows direct command entry. It is sometimes implemented in the kernel, sometimes by a systems program, and sometimes in multiple flavors implemented as shells. It primarily fetches a command from the user and executes it; sometimes the commands are built in, sometimes they are just the names of programs. In the latter case, adding new features doesn't require shell modification.
2) User-Operating-System Interface: GUI - A user-friendly desktop-metaphor interface:
1. Usually mouse, keyboard, and monitor.
2. Icons represent files, programs, actions, etc.
3. Various mouse buttons over objects in the interface cause various actions (provide information, show options, execute a function, open a directory - known as a folder).
4. Invented at Xerox PARC.
Many systems now include both CLI and GUI interfaces:
1. Microsoft Windows is a GUI with a CLI command shell.
2. Apple Mac OS X has the Aqua GUI interface with a UNIX kernel underneath and shells available.
3. Solaris is CLI with optional GUI interfaces (Java Desktop, KDE).

3. Time-Sharing Systems: Time sharing, or multitasking, is a logical extension of multiprogramming. Multiple jobs are executed by switching the CPU between them. Because CPU time is shared among different processes, these are called time-sharing systems. A time slice, defined by the OS, is used for sharing CPU time between processes. Examples: Multics, UNIX, etc.

4. Batch Processing: In batch processing, jobs of the same type are batched together (a BATCH is a set of jobs with similar needs) and executed at one time. The OS was simple; its major task was to transfer control from one job to the next. A job was submitted to the computer operator in the form of punch cards, and at some later time the output appeared. The OS was always resident in memory. (Ref. Fig. next slide.) Common input devices were card readers and tape drives; common output devices were line printers, tape drives, and card punches.

5. Disk Scheduling: The operating system is responsible for using the hardware efficiently; for the disk drives, this means having fast access time and high disk bandwidth. The goal is to minimize seek time, and seek time is roughly proportional to seek distance. Disk bandwidth is the total number of bytes transferred divided by the total time between the first request for service and the completion of the last transfer. There are many sources of disk I/O requests: the OS, system processes, and user processes. An I/O request includes the input or output mode, disk address, memory address, and number of sectors to transfer. The OS maintains a queue of requests per disk or device, and several algorithms exist to schedule the servicing of disk I/O requests. The analysis is true for one or many platters. We illustrate the scheduling algorithms with a request queue (cylinders 0-199): 98, 183, 37, 122, 14, 124, 65, 67.
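For the sample queue above, a first-come-first-served (FCFS) schedule simply services requests in arrival order. A small sketch totals the resulting head movement; the starting head position of 53 is an assumption for illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Request queue from the example (cylinders 0-199). */
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof queue / sizeof queue[0];
    int head = 53;                     /* assumed starting head position */
    int total = 0;

    for (int i = 0; i < n; i++) {      /* FCFS: service in arrival order */
        total += abs(queue[i] - head);
        head = queue[i];
    }
    printf("total head movement: %d cylinders\n", total);   /* prints 640 */
    return 0;
}

Algorithms such as SSTF, SCAN, C-SCAN, and LOOK reorder the queue to reduce this total.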

6. Multiprogramming: Multiprogramming is a technique for executing a number of programs simultaneously on a single processor. In multiprogramming, a number of processes reside in main memory at the same time. The OS picks and begins to execute one of the jobs in main memory; if that process enters an I/O wait, the CPU switches to another job, so the CPU is never idle.

(Figure: memory layout of a multiprogramming system - the OS plus Job 1, Job 2, Job 3, Job 4, and Job 5.)

The figure depicts the layout of a multiprogramming system: main memory holds 5 jobs at a time, and the CPU executes them one by one. Advantages: efficient memory utilization; throughput increases; the CPU is never idle, so performance increases.

7. Race Condition - counter++ could be implemented as
    register1 = counter
    register1 = register1 + 1
    counter = register1
counter-- could be implemented as
    register2 = counter
    register2 = register2 - 1
    counter = register2
Consider this execution interleaving with counter = 5 initially:
S0: producer executes register1 = counter        {register1 = 5}
S1: producer executes register1 = register1 + 1  {register1 = 6}
S2: consumer executes register2 = counter        {register2 = 5}
S3: consumer executes register2 = register2 - 1  {register2 = 4}
S4: producer executes counter = register1        {counter = 6}
S5: consumer executes counter = register2        {counter = 4}
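The interleaving above is exactly what can happen when two threads update the counter without synchronization. A minimal Pthreads sketch (illustrative only; the final value frequently differs from 0 because the increment and decrement are not atomic):

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000
long counter = 0;                        /* shared, unprotected */

void *producer(void *arg) {
    for (int i = 0; i < ITERS; i++) counter++;   /* read-modify-write, not atomic */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERS; i++) counter--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %ld\n", counter);  /* expected 0, often not 0: a race */
    return 0;
}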

8. Critical Section Problem: Consider a system of n processes {P0, P1, ..., Pn-1}. Each process has a critical section, a segment of code in which:
1. the process may be changing common variables, updating a table, writing a file, etc.;
2. when one process is in its critical section, no other process may be in its critical section.
The critical-section problem is to design a protocol to solve this. Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section. The problem is especially challenging with preemptive kernels. The general structure of process Pi is shown in the sketch below.
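The original figure is not reproduced here; as a stand-in, a minimal sketch of that structure, using a Pthreads mutex as the entry/exit protocol (that choice is an assumption for illustration, not the only possible protocol):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;

void *process_pi(void *arg) {
    for (int round = 0; round < 3; round++) {
        pthread_mutex_lock(&lock);      /* entry section: ask permission      */
        shared++;                       /* critical section: touch shared data */
        pthread_mutex_unlock(&lock);    /* exit section                        */
        printf("remainder section, round %d\n", round);   /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process_pi, NULL);
    pthread_create(&t2, NULL, process_pi, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}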

Solution to the Critical-Section Problem - a solution must satisfy three requirements:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speed of the n processes.

9. Readers-Writers Problem (a classic synchronization problem): A data set is shared among a number of concurrent processes. Readers only read the data set; they do not perform any updates. Writers can both read and write. The problem is to allow multiple readers to read at the same time, while only a single writer may access the shared data at any one time. Several variations exist in how readers and writers are treated; all involve priorities.
Shared data: the data set; semaphore mutex initialized to 1; semaphore wrt initialized to 1; integer readcount initialized to 0.
The structure of a writer process:
    do {
        wait(wrt);
        // writing is performed
        signal(wrt);
    } while (TRUE);
The structure of a reader process:
    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        // reading is performed
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    } while (TRUE);
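The same reader/writer pseudocode, written as a runnable Pthreads/POSIX-semaphore sketch; the thread count is an assumption for illustration, and each thread does one pass instead of looping forever:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex, wrt;            /* both initialized to 1, as in the shared data above */
int readcount = 0;

void *writer(void *arg) {
    sem_wait(&wrt);
    printf("writer %ld writing\n", (long)arg);   /* writing is performed */
    sem_post(&wrt);
    return NULL;
}

void *reader(void *arg) {
    sem_wait(&mutex);
    if (++readcount == 1) sem_wait(&wrt);        /* first reader locks out writers */
    sem_post(&mutex);

    printf("reader %ld reading\n", (long)arg);   /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0) sem_post(&wrt);        /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_create(&t[0], NULL, reader, (void *)1);
    pthread_create(&t[1], NULL, writer, (void *)1);
    pthread_create(&t[2], NULL, reader, (void *)2);
    pthread_create(&t[3], NULL, writer, (void *)2);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return 0;
}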

10. Paging vs Segmentation:

Sl. No. | Paging | Segmentation
1 | Transparent to the programmer (the system allocates memory) | Involves the programmer (memory is allocated to specific functions inside the code)
2 | No separate protection | Separate protection
3 | No separate compiling | Separate compiling
4 | No shared code | Shared code
5 | Block length is fixed | Block length is variable
6 | One-dimensional address space | Two-dimensional address space
7 | Static linking | Dynamic linking
8 | Internal fragmentation | External fragmentation
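Row 6 of the table: a paged address space looks one-dimensional to the program, and the hardware simply splits the address into a page number and an offset; a segmented address is the two-dimensional pair (segment, offset). A tiny sketch of the paging split, assuming 4 KB pages (an illustrative choice):

#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12                     /* assume 4 KB pages */
#define PAGE_SIZE   (1u << OFFSET_BITS)

int main(void) {
    uint32_t logical = 0x3A7F;                       /* a one-dimensional logical address */
    uint32_t page    = logical >> OFFSET_BITS;       /* page number                        */
    uint32_t offset  = logical & (PAGE_SIZE - 1);    /* offset within the page             */
    printf("page %u, offset %u\n", page, offset);
    /* A segmented address, by contrast, is supplied as the two-dimensional
       pair (segment number, offset) by the programmer or compiler. */
    return 0;
}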

11. Difference between Process and Thread

S.N. | Process | Thread
1 | A process is heavyweight, or resource intensive. | A thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple-processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multithreaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write, or change another thread's data.

12. Difference between User-Level and Kernel-Level Threads

S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports the creation of kernel threads.
3 | A user-level thread is generic and can run on any operating system. | A kernel-level thread is specific to the operating system.
4 | A multithreaded application cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.

13. Fragmentation

As processes are loaded into and removed from memory, the free memory space is broken into little pieces. It sometimes happens that processes cannot be allocated to memory blocks because the blocks are too small, so those blocks remain unused. This problem is known as fragmentation. Fragmentation is of two types:

S.N. | Fragmentation | Description
1 | External fragmentation | Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2 | Internal fragmentation | The memory block assigned to a process is bigger than requested; some portion of it is left unused and cannot be used by another process.

External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. For compaction to be feasible, relocation must be dynamic.

13. Multiprogramming
In a multiprogramming system there are one or more programs loaded in main memory that are ready to execute. Only one program at a time is able to use the CPU for executing its instructions (i.e., there is at most one process running on the system) while all the others wait their turn.
The main idea of multiprogramming is to maximize the use of CPU time. Suppose the currently running process is performing an I/O task (which, by definition, does not need the CPU). The OS may then interrupt that process and give control to one of the other in-memory programs that are ready to execute. In this way no CPU time is wasted waiting for the I/O task to complete, and a running process keeps executing until it either voluntarily releases the CPU or blocks for an I/O operation. The ultimate goal of multiprogramming is therefore to keep the CPU busy as long as there are processes ready to execute.
Note that for such a system to function properly, the OS must be able to load multiple programs into separate areas of main memory and provide the required protection so that one process cannot be modified by another. Other problems that need to be addressed when keeping multiple programs in memory are fragmentation as programs enter or leave main memory, and the fact that large programs may not fit in memory at once, which can be solved by using paging and virtual memory.
Finally, note that if there are N ready processes and all of them are highly CPU-bound (i.e., they mostly execute CPU tasks and few or no I/O operations), in the very worst case one program might have to wait for all the other N-1 to complete before executing.

Multiprocessing
Multiprocessing sometimes refers to executing multiple processes (programs) at the same time. This can be misleading, because we have already introduced the term multiprogramming to describe that. In fact, multiprocessing refers to the hardware (i.e., the CPU units) rather than the software (i.e., running processes): if the underlying hardware provides more than one processor, that is multiprocessing. Several variations on the basic scheme exist, e.g., multiple cores on one die, multiple dies in one package, or multiple packages in one system. A system can be both multiprogrammed, by having multiple programs running at the same time, and multiprocessing, by having more than one physical processor.

14. Linux components: Linux is one of the popular versions of the UNIX operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind, and its list of functionality is quite similar to that of UNIX.

Components of a Linux System
The Linux operating system has primarily three components:
Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, consists of various modules, and interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.
System Library - System libraries are special functions or programs through which application programs or system utilities access the kernel's features. These libraries implement most of the functionality of the operating system and do not require the kernel module's code access rights.
System Utility - System utility programs are responsible for doing specialized, individual-level tasks.

Basic Features
The following are some of the important features of the Linux operating system.
Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.
Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.
Multi-User - Linux is a multi-user system, meaning multiple users can access system resources such as memory/RAM/application programs at the same time.
Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.
Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.
Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.
Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture

The Linux system architecture consists of the following layers:
Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).
Kernel - The core component of the operating system; it interacts directly with the hardware and provides low-level services to upper-layer components.
Shell - An interface to the kernel that hides the complexity of the kernel's functions from users. It takes commands from the user and executes the kernel's functions.
Utilities - Utility programs that give the user most of the functionality of an operating system.

15. System and Application Software: Comparison

1) The system software helps in operating the computer hardware and provides a platform for running the application software. Application software helps the user in performing single or multiple related computing tasks.

2) System software executes in a self-created environment. Application software executes in the environment created by the system software.
3) System software executes continuously as long as the computer system is running. Application software executes as and when the user requires it.
4) The programming of system software is complex, requiring knowledge of the workings of the underlying hardware. The programming of application software is relatively easier and requires only knowledge of the underlying system software.
5) There is far less system software than application software.
6) System software runs in the background, and users typically do not interact with it. Application software runs in the foreground, and users interact with it frequently for all their computing needs.
7) System software can function independently of application software. Application software depends on the system software and cannot run without it.
8) Examples of system software: Windows OS, BIOS, device firmware, Mac OS X, Linux, etc. Examples of application software: Windows Media Player, Adobe Photoshop, World of Warcraft (game), iTunes, MySQL, etc.

17. Deadlocks: A deadlock is a situation in which two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes.
System model:
1) Resource types R1, R2, ..., Rm (CPU cycles, memory space, I/O devices).
2) Each resource type Ri has Wi instances.
3) Each process utilizes a resource as follows: request, use, release.
Deadlock characterization - A deadlock can arise only if four conditions hold simultaneously:
Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Deadlock prevention - restrain the ways requests can be made:
Mutual exclusion - not required for sharable resources; must hold for nonsharable resources.
Hold and wait - must guarantee that whenever a process requests a resource, it does not hold any other resources:
1) Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it has none.
2) This gives low resource utilization, and starvation is possible.
No preemption:
1) If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released.
2) Preempted resources are added to the list of resources for which the process is waiting.
3) The process will be restarted only when it can regain its old resources as well as the new ones it is requesting.
Circular wait - impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
Deadlock avoidance - requires that the system has some additional a priori information available:
1) The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
2) The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
3) The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
Resource-Allocation Graph and Wait-for Graph -

a) Resource-Allocation Graph  b) Corresponding wait-for graph

Detection algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   (a) Work = Available
   (b) For i = 1, 2, ..., n, if Allocation_i != 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both:
   (a) Finish[i] == false
   (b) Request_i <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation_i; Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, 1 <= i <= n, then the system is in a deadlock state. Moreover, if Finish[i] == false, then Pi is deadlocked.

Example of the detection algorithm:
1) Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C (6 instances).
2) Snapshot at time T0:

        Allocation   Request   Available
        A B C        A B C     A B C
   P0   0 1 0        0 0 0     0 0 0
   P1   2 0 0        2 0 2
   P2   3 0 3        0 0 0
   P3   2 1 1        1 0 0
   P4   0 0 2        0 0 2

3) The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
4) P2 requests an additional instance of type C:

        Request
        A B C
   P0   0 0 0
   P1   2 0 2
   P2   0 0 1
   P3   1 0 0
   P4   0 0 2

State of the system?
1) We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes' requests.
2) A deadlock exists, consisting of processes P1, P2, P3, and P4.

18. Thrashing
1) If a process does not have enough pages, the page-fault rate is very high: it page-faults to get a page, replaces an existing frame, but quickly needs the replaced frame back. This leads to low CPU utilization, the operating system thinking it needs to increase the degree of multiprogramming, and another process being added to the system.
2) Thrashing: a process is busy swapping pages in and out.
3) (Graph: CPU utilization versus degree of multiprogramming.)
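Returning to the detection algorithm in item 17, a minimal C sketch run on the worked example's matrices (the snapshot after P2's extra request); it reports P1-P4 as deadlocked:

#include <stdbool.h>
#include <stdio.h>

#define N 5   /* processes */
#define M 3   /* resource types A, B, C */

/* Snapshot after P2 requests one more instance of C (from the example). */
int Allocation[N][M] = {{0,1,0},{2,0,0},{3,0,3},{2,1,1},{0,0,2}};
int Request[N][M]    = {{0,0,0},{2,0,2},{0,0,1},{1,0,0},{0,0,2}};
int Available[M]     = {0,0,0};

int main(void) {
    int Work[M];
    bool Finish[N];
    for (int j = 0; j < M; j++) Work[j] = Available[j];      /* step 1(a) */
    for (int i = 0; i < N; i++) {                             /* step 1(b) */
        Finish[i] = true;
        for (int j = 0; j < M; j++)
            if (Allocation[i][j] != 0) { Finish[i] = false; break; }
    }
    bool progress = true;
    while (progress) {                                        /* steps 2 and 3 */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (Finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (Request[i][j] > Work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++) Work[j] += Allocation[i][j];
                Finish[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < N; i++)                               /* step 4 */
        if (!Finish[i]) printf("P%d is deadlocked\n", i);
    return 0;
}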

19.Parser-

Top-Down Parsing - A parse tree is created from the root to the leaves; the traversal of the parse tree is a preorder traversal, tracing a leftmost derivation. Two types: backtracking parser and predictive parser.
Bottom-Up Parsing - A parse tree is created from the leaves to the root; the traversal of the parse tree is a reverse postorder traversal, tracing a rightmost derivation. More powerful than top-down parsing.

21. Semaphore - A synchronization tool that does not require busy waiting. A semaphore S is an integer variable that, apart from initialization, can only be accessed via two indivisible (atomic) operations: wait() and signal(), originally called P() and V(). It is less complicated than other mechanisms.
Implementation of wait:
    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();
        }
    }
Implementation of signal:
    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
        }
    }

22. ALLOCATION METHODS FOR DISK SPACE:
1) Allocation Methods - Contiguous: An allocation method refers to how disk blocks are allocated for files. In contiguous allocation, each file occupies a set of contiguous blocks. It gives the best performance in most cases and is simple: only the starting location (block #) and length (number of blocks) are required. Problems include finding space for a file, knowing the file size in advance, external fragmentation, and the need for compaction, either off-line (downtime) or on-line.

2) Allocation Methods - Linked: In linked allocation, each file is a linked list of disk blocks, and the blocks may be scattered anywhere on the disk. The file ends at a nil pointer. There is no external fragmentation and no need for compaction. Each block contains a pointer to the next block. The free-space management system is called when a new block is needed. Efficiency can be improved by clustering blocks into groups, but this increases internal fragmentation. Reliability can be a problem, and locating a block can take many I/Os and disk seeks.
FAT (File Allocation Table) variation: the beginning of the volume has a table, indexed by block number. It works much like a linked list, but it is faster on disk and cacheable, and new block allocation is simple.

File-Allocation Table
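A minimal sketch of how such a table is used: each FAT entry names the next block of the file, so reading a file is just following the chain (the table contents here are made up for illustration):

#include <stdio.h>

#define NBLOCKS  16
#define END_MARK -1

int main(void) {
    /* Hypothetical FAT: fat[b] holds the number of the next block of the file. */
    int fat[NBLOCKS];
    for (int i = 0; i < NBLOCKS; i++) fat[i] = END_MARK;

    /* A small file whose blocks are scattered on disk: 2 -> 9 -> 5 -> end. */
    fat[2] = 9; fat[9] = 5; fat[5] = END_MARK;

    for (int b = 2; b != END_MARK; b = fat[b])   /* walk the chain from the start block */
        printf("data block %d\n", b);
    return 0;
}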

3) Allocation Methods - Indexed

Example of Indexed Allocation-

Indexed allocation needs an index table. It provides random access and dynamic access without external fragmentation, but it has the overhead of the index block. Mapping from logical to physical in a file of maximum size 256K words with a block size of 512 words: we need only 1 block for the index table.
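The single-level mapping just described is a simple divide/modulo on the logical address; a small sketch using the 512-word block size from the text (the sample offset is an assumption):

#include <stdio.h>

/* Single-level indexed allocation from the example above:
   512-word blocks, so 512 one-word index entries cover a file
   of up to 512 * 512 = 256K words. */
#define BLOCK_SIZE 512

int main(void) {
    long logical = 70000;                    /* logical word offset within the file (assumed) */
    long entry   = logical / BLOCK_SIZE;     /* which entry of the index block to follow      */
    long offset  = logical % BLOCK_SIZE;     /* displacement within the selected data block   */
    printf("index entry %ld, offset %ld\n", entry, offset);
    return 0;
}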

Mapping from logical to physical in a file of unbounded length (block size of 512 words): linked scheme - link the blocks of the index table (no limit on size).

Two-level index (4K blocks could store 1,024 four-byte pointers in the outer index -> 1,048,576 data blocks and a file size of up to 4 GB).

22. Process Management: A process is a program in execution. It is a unit of work within the system. A program is a passive entity; a process is an active entity. A process needs resources to accomplish its task: CPU, memory, I/O, files, and initialization data. Process termination requires reclaiming any reusable resources. A single-threaded process has one program counter specifying the location of the next instruction to execute; the process executes instructions sequentially, one at a time, until completion. A multi-threaded process has one program counter per thread. Typically a system has many processes, some user and some operating-system, running concurrently on one or more CPUs; concurrency is achieved by multiplexing the CPUs among the processes and threads.
The operating system is responsible for the following activities in connection with process management:
1) Creating and deleting both user and system processes.
2) Suspending and resuming processes.
3) Providing mechanisms for process synchronization.
4) Providing mechanisms for process communication.
5) Providing mechanisms for deadlock handling.

Memory Management: All data must be in memory before and after processing, and all instructions must be in memory in order to execute. Memory management determines what is in memory and when, optimizing CPU utilization and the computer's response to users. Memory-management activities include:
1) Keeping track of which parts of memory are currently being used and by whom.
2) Deciding which processes (or parts thereof) and data to move into and out of memory.
3) Allocating and deallocating memory space as needed.

24. Semaphore - A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation (semaphore initialize).

Binary semaphores can assume only the value 0 or the value 1; counting semaphores, also called general semaphores, can assume any nonnegative value. The P (or wait, sleep, or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:

P(S): IF S > 0
          THEN S := S - 1
          ELSE (wait on S)

The V (or signal, wakeup, or up) operation on semaphore S, written as V(S) or signal(S), operates as follows:

V(S): IF (one or more processes are waiting on S)
          THEN (let one of these processes proceed)
          ELSE S := S + 1

Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore S is enforced within P(S) and V(S). If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed; the other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement. Semaphores solve the lost-wakeup problem.

Producer-Consumer Problem Using Semaphores: The solution to the producer-consumer problem uses three semaphores, namely full, empty, and mutex. The semaphore 'full' counts the number of slots in the buffer that are full, 'empty' counts the number of slots that are empty, and 'mutex' makes sure that the producer and consumer do not access the modifiable shared section of the buffer simultaneously.
Initialization: set full buffer slots to 0, i.e. semaphore full = 0; set empty buffer slots to N, i.e. semaphore empty = N; to control access to the critical section, set mutex to 1, i.e. semaphore mutex = 1.
Producer()
    WHILE (true)
        produce-Item();
        P(empty);
        P(mutex);
        enter-Item();
        V(mutex);
        V(full);
Consumer()
    WHILE (true)
        P(full);
        P(mutex);
        remove-Item();
        V(mutex);
        V(empty);
        consume-Item(Item);
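The same producer-consumer solution as a runnable POSIX-semaphore sketch; the buffer size, item type, and iteration count are assumptions for illustration:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4                          /* buffer slots (assumed) */
int buffer[N];
int in = 0, out = 0;

sem_t empty, full, mutex;            /* empty = N, full = 0, mutex = 1 */

void *producer(void *arg) {
    for (int item = 1; item <= 8; item++) {
        sem_wait(&empty);            /* P(empty) */
        sem_wait(&mutex);            /* P(mutex) */
        buffer[in] = item;           /* enter-Item() */
        in = (in + 1) % N;
        sem_post(&mutex);            /* V(mutex) */
        sem_post(&full);             /* V(full)  */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 8; i++) {
        sem_wait(&full);             /* P(full)  */
        sem_wait(&mutex);            /* P(mutex) */
        int item = buffer[out];      /* remove-Item() */
        out = (out + 1) % N;
        sem_post(&mutex);            /* V(mutex) */
        sem_post(&empty);            /* V(empty) */
        printf("consumed %d\n", item);   /* consume-Item() */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}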

Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to continuously page-fault. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

25. Loaders: A loader is a system program which takes the object code of a program as input and prepares it for execution.
Loader functions - the loader performs the following functions:
Allocation - The loader determines and allocates the required memory space for the program to execute properly.
Linking - The loader analyses and resolves the symbolic references made in the object modules.
Relocation - The loader maps and relocates the address references to correspond to the newly allocated memory space during execution.
Loading - The loader actually loads the machine code corresponding to the object modules into the allocated memory space and makes the program ready to execute.

1) Compile-and-Go Loaders: A compile-and-go loader is one in which the assembler itself does the processing of compiling and then places the assembled instructions in the designated memory locations. The assembly process is executed first, and then the assembler causes a transfer to the first instruction of the program (e.g., the WATFOR FORTRAN compiler). This loading scheme is also called assemble-and-go.
Advantages of compile-and-go loaders: simple and easy to implement; no additional routines are required to load the compiled code into memory.
Disadvantages of compile-and-go loaders: wastage of memory space due to the presence of the assembler; the code must be re-assembled every time it is to be run.

2) Absolute Loader: The absolute loader will load the program at memory location x200:
1. The header record is checked to verify that the correct program has been presented for loading.
2. Each text record is read and moved to the indicated address in memory.
3. When the end record (EOF) is encountered, the loader jumps to the specified address to begin execution.
The four functions performed by an absolute loader are: 1. Allocation, 2. Linking, 3. Relocation, 4. Loading.
Advantages of the absolute loader: simple, easy to design and implement; more core memory is available to the user, so the memory limit is less of a concern.
Disadvantages of the absolute loader: the programmer must specifically tell the assembler the address where the program is to be loaded, and when subroutines are referenced, the programmer must specify their addresses whenever they are called.

26. Protection and Security: Protection is any mechanism for controlling the access of processes or users to resources defined by the OS. Security is the defense of the system against internal and external attacks, covering a huge range including denial-of-service, worms, viruses, identity theft, and theft of service. Systems generally first distinguish among users to determine who can do what. User identities (user IDs, security IDs) include a name and an associated number, one per user; the user ID is then associated with all files and processes of that user to determine access control. A group identifier (group ID) allows a set of users to be defined and controls to be managed, and is likewise associated with each process and file. Privilege escalation allows a user to change to an effective ID with more rights.
