Virtual Memory


Description: Virtual Memory in Windows XP & FAT32 vs. NTFS in Windows XP

Transcript of Virtual Memory


FAT & NTFS File Systems in Windows XP
Version 1.2 — Last Updated September 1, 2002

by Alex Nichol, MS-MVP
© 2002 by Author, All Rights Reserved

1. INTRODUCTION

Files in Windows XP can be organised on the hard disk in two different ways.

• The old FAT (File Allocation Table) file system was developed originally (when the original IBM PCs came out) for MS-DOS on small machines and floppy disks. There are variants — FAT12 is used on all floppy disks, for example — but hard disk partitions in Windows XP can be assumed to use the FAT32 version, or 32-bit File Allocation Table.

• Later, a more advanced file system was developed for hard disks in Windows NT, called NTFS (the “NT File System”). This has matured, through several versions, into the latest one that exists alongside FAT in Windows XP.

The file system used goes with an individual partition of the disk. You can mix the two types on the same physical drive. The Windows XP operating system is the same, whichever file system is used for its partition, so it is a mistake (and source of confusion) to speak of “a FAT disk reading an NTFS partition.” It is the operating system, not the disk, that does the reading.

Actual files are unaffected by which file system they are on; that is merely a matter of a method of storage. An analogy would be letters stored in an office. They might be in box-files on shelves (FAT) or in suspended folders in file cabinets (NTFS); but the letters themselves would be unaffected by the choice of which way to store them, and could be moved from one storage place to the other. Similarly, files can be moved between folders on an NTFS partition and folders on a FAT partition, or across a network to another machine that might not even be running Windows.

EXAMPLE: Consider the downloading to your computer of a file through a link on a web page. You click on the link, and the file is copied across the Internet and stored on your hard drive. If you download the file from this present site, the file is stored on a computer running Unix, which uses neither FAT nor NTFS. The file itself is not affected when it is copied from a Windows computer to the Unix-based server, or copied from that server to your Windows-based computer.

However, if a machine has two different operating systems on it, dual booted, they may not both be able to read both types of partition. DOS (including an Emergency Startup boot floppy), Windows 95/98, and Windows ME cannot handle NTFS (without third party assistance). Early versions of Windows NT cannot handle FAT32, only FAT16. So, if you have such a mixed environment, any communal files must be held on a partition of a type that both operating systems can understand — meaning, usually, a FAT32 partition. (See the article Planning Your Partitions on this site, under the section “Multiple Operating Systems,” for a table of which file system each recent version of Windows can use and understand.)

2. WHICH SYSTEM TO USE?

There are three considerations that affect which file system should be chosen for any partition:

a. Do you want to use the additional capabilities that only NTFS supports?


NTFS can provide control of file access by different users, for privacy and security. The Home Edition of Windows XP only supports this to the limited extent of keeping each user’s documents private to him or herself. Full file-access control is provided in Windows XP Professional, as is encryption of individual files and folders. If you use encryption it is essential to back up the encryption certificates used — otherwise, if the partition containing your "Documents and Settings" has to be reformatted, the files will be irretrievably lost.

b. Considerations of Stability and Resilience

NTFS has stronger means of recovering from troubles than does FAT. All changes to files are “journalized,” which allows the system to roll back the state of a file after a crash of the program using it or a crash of the system. Also, the structure of the file system is less likely to suffer damage in a crash, and is therefore more easily reinstated by CheckDisk (CHKDSK.EXE). But in practical terms, the stability of FAT is adequate for many users, and it has the benefit that a FAT partition is accessible for repair after booting from a DOS mode startup floppy, such as one from Windows 98. If an NTFS partition is so damaged that it is not possible to boot Windows, then repair can be very difficult.

c. Considerations of economy and performance

In a virtual memory system like Windows XP, the ideal size of disk clusters matches the internal “page size” used by the Intel processors — 4 kilobytes. An NTFS partition of almost any size you are likely to meet will use this, but it is only used in FAT32 up to an 8 GB partition. Above that, the cluster size for FAT increases, and the wastefulness of the “cluster overhang” grows, a point illustrated in the sketch following these considerations. (For a table of the varying default cluster sizes used by FAT16, FAT32, and Win XP’s version of NTFS, for partitions of varying sizes, click here.)

On the other hand NTFS takes much more space for holding descriptive information on every file in that file’s own block in the Master File Table (MFT). This can use quite a large proportion of the disk, though this is offset by a possibility that the data of a very small file may be stored entirely in its MFT block. Because NTFS holds significant amounts of these structures in memory, it places larger demands on memory than does FAT.

Searching directories in NTFS uses a more efficient structure for its access to files, so searching a FAT partition is a slower process in big directories. Scanning the FAT for the pieces of a fragmented file is also slower. On the other hand, NTFS carries the overhead of maintaining the “journalized” recovery.

Also, of course, in a dual boot system, there may be the overriding need to use FAT on a partition so that it can also be read from, say, Windows 98.
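To make the “cluster overhang” point concrete, here is a small sketch in C that estimates the space wasted when a handful of files are stored with different cluster sizes. The file sizes and cluster sizes are hypothetical inputs chosen for illustration, not measurements from any real partition.

    #include <stdio.h>

    /* Bytes actually allocated for a file: it always occupies a whole
       number of clusters, so part of the last cluster is wasted
       ("cluster overhang"). For simplicity, even an empty file is
       counted as taking one cluster. */
    static unsigned long allocated_bytes(unsigned long file_size,
                                         unsigned long cluster_size)
    {
        unsigned long clusters = (file_size + cluster_size - 1) / cluster_size;
        if (clusters == 0)
            clusters = 1;
        return clusters * cluster_size;
    }

    int main(void)
    {
        /* Hypothetical mix of small files, sizes in bytes. */
        unsigned long files[] = { 700, 3000, 5200, 12000, 40000 };
        unsigned long cluster_sizes[] = { 4096, 16384, 32768 };
        size_t nf = sizeof files / sizeof files[0];
        size_t nc = sizeof cluster_sizes / sizeof cluster_sizes[0];

        for (size_t c = 0; c < nc; c++) {
            unsigned long used = 0, allocated = 0;
            for (size_t f = 0; f < nf; f++) {
                used += files[f];
                allocated += allocated_bytes(files[f], cluster_sizes[c]);
            }
            printf("%2lu KB clusters: %lu bytes of data occupy %lu bytes (%lu wasted)\n",
                   cluster_sizes[c] / 1024, used, allocated, allocated - used);
        }
        return 0;
    }

The same data wastes more space as the cluster size grows, which is why large FAT32 partitions (with 16 KB or 32 KB clusters) are less economical than 4 KB-cluster NTFS for collections of small files.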

3. ON BALANCE

Leaving matters of access control and dual use aside, as partition sizes grow, the case for NTFS gets stronger. Microsoft definitely recommends NTFS for partitions larger than 32 GB — to the extent that Windows XP will not format a FAT partition above that size. However, with smaller sizes, FAT is likely to be more efficient — certainly below 4 GB, and probably below 8 GB. I suggest that NTFS should be used for partitions of 16 GB or above, where the FAT32 cluster size goes up to 16 KB, the intermediate region (that is, partitions between 8 and 16 GB in size) being largely a matter of taste.


4. CAN I CONVERT ONE SYSTEM TO THE OTHER?

Ideally, a disk is initially formatted in the file system which is to be used permanently — NTFS, for example, can then put the Master File Table in its optimal location in the middle of the partition.

However, on an upgrade of an existing system, the file system is left as it is. For example, an upgraded Windows 98 system will be on FAT32. Also, some computer makers ship new computers with all partitions formatted as FAT32. These can be converted to NTFS if that seems more suitable to your needs. If you use the method described here, the result will be nearly as satisfactory as if a fresh format to NTFS had been done.

But this conversion is a one-way process. Windows XP provides a native tool for converting FAT to NTFS, but no tool for converting NTFS to FAT. It may be possible to convert NTFS to FAT using Partition Magic 7.01, but the result is uncertain. If you attempt it, it is essential that you first decrypt all encrypted files, or they will be forever inaccessible. (For this reason, Partition Magic will stop if it finds one.) If it is a new machine, too, be sure that your warranty will not be compromised by doing a file system conversion.

A further aspect that needs caution is that the conversion may result in the NTFS permissions on the partition and its folders being not the simple general access that might be expected. It is certainly important that the conversion be done when logged in as an Administrator.

5. BACKUP & DISK IMAGING

Will a backup or image made from NTFS remain NTFS if I restore to a newly formatted partition?

This depends on the approach of the particular backup program you use. It may make an exact image of the partition, including the file system’s structures, in which case the restored partition will be exactly as the original. (Indeed, any format of the drive before restoring the drive image not only is unnecessary, but all that it accomplishes will be overwritten when you restore the image.) Or, the software may work on a file-by-file basis, in which case the files themselves will be restored — to whatever file system has been used in formatting the partition to which you restore them. But, again, note that a file-by-file restore from a backup of NTFS to a FAT partition will result in encrypted files being unreadable, because there is no way to decrypt them on FAT!

http://aumha.org/win5/a/xpvm.php

Virtual Memory in Windows XP
Version 1.6 — Last Updated February 21, 2006

by Alex Nichol (MS-MVP - Windows Storage Management/File Systems)
© 2002-2005 by Author, All Rights Reserved


Introduction

This page attempts to be a stand-alone description for general users of the way Virtual Memory operates in Windows XP. Other pages on this site are written mainly for Windows 98/ME (see Windows 98 & Win ME Memory Management) and, while a lot is in common, there are significant differences in Windows XP.

What is Virtual Memory?

A program instruction on an Intel 386 or later CPU can address up to 4 GB of memory, using its full 32 bits. This is normally far more than the RAM of the machine. (2 raised to the 32nd power is exactly 4,294,967,296, or 4 GB; 32 binary digits allow the representation of 4,294,967,296 numbers, counting 0.) So the hardware provides for programs to operate in terms of as much as they wish of this full 4 GB space as Virtual Memory, those parts of the program and data which are currently active being loaded into Physical Random Access Memory (RAM). The processor itself then translates (‘maps’) the virtual addresses from an instruction into the correct physical equivalents, doing this on the fly as the instruction is executed. The processor manages the mapping in terms of pages of 4 kilobytes each — a size that has implications for managing virtual memory by the system.
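As an illustration of the page-based mapping just described, the following sketch splits a 32-bit virtual address into a page number and an offset within its 4 KB page. It is a deliberate simplification: the real translation on x86 goes through page directories and page tables maintained by the system, which this example does not attempt to model, and the example address is arbitrary.

    #include <stdio.h>

    #define PAGE_SIZE 4096u          /* 4 KB pages, as used by the processor */
    #define PAGE_OFFSET_BITS 12      /* because 2^12 = 4096 */

    int main(void)
    {
        unsigned long virtual_address = 0x7FFD1234ul;   /* arbitrary example address */

        unsigned long page_number = virtual_address >> PAGE_OFFSET_BITS;
        unsigned long offset      = virtual_address & (PAGE_SIZE - 1);

        printf("Virtual address 0x%08lX\n", virtual_address);
        printf("  page number: %lu\n", page_number);
        printf("  offset     : %lu bytes into that 4 KB page\n", offset);

        /* The full 32-bit space is 4,294,967,296 bytes (4 GB),
           which is 1,048,576 pages of 4 KB each. */
        printf("Pages in a 4 GB address space: %lu\n", 0xFFFFFFFFul / PAGE_SIZE + 1);
        return 0;
    }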

What are Page Faults?

Only those parts of the program and data that are currently in active use need to be held in physical RAM. Other parts are then held in a swap file (as it’s called in Windows 95/98/ME: Win386.swp) or page file (in Windows NT versions including Windows 2000 and XP: pagefile.sys). When a program tries to access some address that is not currently in physical RAM, it generates an interrupt, called a Page Fault. This asks the system to retrieve the 4 KB page containing the address from the page file (or in the case of code possibly from the original program file). This — a valid page fault — normally happens quite invisibly. Sometimes, through program or hardware error, the page is not there either. The system then has an ‘Invalid Page Fault’ error. This will be a fatal error if detected in a program: if it is seen within the system itself (perhaps because a program sent it a bad request to do something), it may manifest itself as a ‘blue screen’ failure with a STOP code: consult the page on STOP Messages on this site.

If there is pressure on space in RAM, then parts of code and data that are not currently needed can be ‘paged out’ in order to make room — the page file can thus be seen as an overflow area to make the RAM behave as if it were larger than it is.

What is loaded in RAM?

Items in RAM can be divided into:

• The Non-Paged area. Parts of the System which are so important that they may never be paged out — the area of RAM used for these is called in XP the ‘Non-Paged area’. Because this mainly contains core code of the system, which is not likely to contain serious faults, a Blue Screen referring to ‘Page Fault in Non-Paged area’ probably indicates a serious hardware problem with the RAM modules, or possibly damaged code resulting from a defective hard disk. It is, though, possible that external utility software (e.g. Norton) may put modules there too, so if such faults arise when you have recently installed or updated something of this sort, try uninstalling it.


• The Page Pool, which can be used to hold:
  • Program code,
  • Data pages that have had actual data written to them, and
  • A basic amount of space for the file cache (known in Windows 9x systems as Vcache) of files that have recently been read from or written to hard disk.

Any remaining RAM will be used to make the file cache larger.

Why is there so little Free RAM?

Windows will always try to find some use for all of RAM — even a trivial one. If nothing else it will retain code of programs in RAM after they exit, in case they are needed again. Anything left over will be used to cache further files — just in case they are needed. But these uses will be dropped instantly should some other use come along. Thus there should rarely be any significant amount of RAM ‘free’. That term is a misnomer — it ought to be ‘RAM for which Windows can currently find no possible use’. The adage is: ‘Free RAM is wasted RAM’. Programs that purport to ‘manage’ or ‘free up’ RAM are pandering to a delusion that only such ‘Free’ RAM is available for fresh uses. That is not true, and these programs often result in reduced performance and may result in run-away growth of the page file.

Where is the page file?

The page file in XP is a hidden file called pagefile.sys. It is regenerated at each boot — there is no need to include it in a backup. To see it you need to have Folder Options | View set to ‘Show Hidden and System files’, and not to ‘Hide Protected mode System files’.
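For those who prefer to check from code rather than from Explorer, the sketch below asks Windows for the attributes of pagefile.sys and reports whether the hidden and system flags are set. The path C:\pagefile.sys is an assumption; adjust it if your page file lives on another drive.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const char *path = "C:\\pagefile.sys";   /* assumed location; adjust if needed */
        DWORD attrs = GetFileAttributesA(path);

        if (attrs == INVALID_FILE_ATTRIBUTES) {
            printf("Could not query %s (error %lu); perhaps the file is on another drive.\n",
                   path, GetLastError());
            return 1;
        }
        printf("%s exists.\n", path);
        printf("  hidden attribute: %s\n", (attrs & FILE_ATTRIBUTE_HIDDEN) ? "yes" : "no");
        printf("  system attribute: %s\n", (attrs & FILE_ATTRIBUTE_SYSTEM) ? "yes" : "no");
        return 0;
    }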

In earlier NT systems it was usual to have such a file on each hard drive partition, if there were more than one partition, with the idea of having the file as near as possible to the ‘action’ on the disk. In XP the optimisation implied by this has been found not to justify the overhead, and normally there is only a single page file in the first instance.

Where do I set the placing and size of the page file?

At Control Panel | System | Advanced, click Settings in the “Performance” section. On the Advanced page of the result, the current total physical size of all page files that may be in existence is shown. Click Change to make settings for the Virtual memory operation. Here you can select any drive partition and set either ‘Custom’, ‘System Managed’ or ‘No page file’; then always click Set before going on to the next partition.

Should the file be left on Drive C:?

The slowest aspect of getting at a file on a hard disk is in head movement (‘seeking’). If you have only one physical drive then the file is best left where the heads are most likely to be, so where most activity is going on — on drive C:. If you have a second physical drive, it is in principle better to put the file there, because it is then less likely that the heads will have moved away from it. If, though, you have a modern large size of RAM, actual traffic on the file is likely to be low, even if programs are rolled out to it, inactive, so the point becomes an academic one. If you do put the file elsewhere, you should leave a small amount on C: — an initial size of 2 MB with a Maximum of 50 is suitable — so it can be used in emergency. Without this, the system is inclined to ignore the settings and either have no page file at all (and complain) or make a very large one indeed on C:.

In relocating the page file, it must be on a ‘basic’ drive. Windows XP appears not to be willing to accept page files on ‘dynamic’ drives.

NOTE: If you are debugging crashes and wish the error reporting to make a kernel or full dump, then you will need an initial size set on C: of either 200 MB (for a kernel dump) or the size of RAM (for a full memory dump). If you are not doing so, it is best to allow no more than a ‘Small Dump’: at Control Panel | System | Advanced, click Settings in the ‘Startup and Recovery’ section, and make the selection in the ‘Write Debug information to’ panel.

Can the Virtual Memory be turned off on a really large machine?

Strictly speaking Virtual Memory is always in operation and cannot be “turned off.” What is meant by such wording is “set the system to use no page file space at all.”

Doing this would waste a lot of the RAM. The reason is that when programs ask for an allocation of Virtual memory space, they may ask for a great deal more than they ever actually bring into use — the total may easily run to hundreds of megabytes. These addresses have to be assigned to somewhere by the system. If there is a page file available, the system can assign them to it — if there is not, they have to be assigned to RAM, locking it out from any actual use.
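The difference between asking for virtual address space and actually bringing it into use can be seen with the Windows VirtualAlloc call. The sketch below reserves a large region, which consumes only addresses, and then commits a small part of it, which is the part that must be backed by RAM or the page file. The sizes chosen are arbitrary examples.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const SIZE_T reserve_size = 256 * 1024 * 1024;  /* 256 MB of address space */
        const SIZE_T commit_size  = 1024 * 1024;        /* 1 MB actually brought into use */

        /* Reserve address space only: no RAM or page file is charged yet. */
        void *region = VirtualAlloc(NULL, reserve_size, MEM_RESERVE, PAGE_NOACCESS);
        if (region == NULL) {
            printf("Reservation failed (error %lu)\n", GetLastError());
            return 1;
        }
        printf("Reserved 256 MB of virtual addresses at %p\n", region);

        /* Commit 1 MB of it: this part must be backed by RAM or the page file. */
        if (VirtualAlloc(region, commit_size, MEM_COMMIT, PAGE_READWRITE) != NULL) {
            ((char *)region)[0] = 42;   /* touching a page brings it into RAM */
            printf("Committed and touched the first 1 MB\n");
        } else {
            printf("Commit failed (error %lu)\n", GetLastError());
        }

        VirtualFree(region, 0, MEM_RELEASE);   /* release the whole region */
        return 0;
    }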

How big should the page file be?

There is a great deal of myth surrounding this question. Two big fallacies are:

• The file should be a fixed size so that it does not get fragmented, with minimum and maximum set the same

• The file should be 2.5 times the size of RAM (or some other multiple)

Both are wrong in a modern, single-user system. (A machine using Fast User Switching is a special case, discussed below.)

Windows will expand a file that starts out too small and may shrink it again if it is larger than necessary, so it pays to set the initial size as large enough to handle the normal needs of your system to avoid constant changes of size. This will give all the benefits claimed for a ‘fixed’ page file. But no restriction should be placed on its further growth. As well as providing for contingencies, like unexpectedly opening a very large file, in XP this potential file space can be used as a place to assign those virtual memory pages that programs have asked for, but never brought into use. Until they get used — probably never — the file need not come into being. There is no downside in having potential space available.

For any given workload, the total need for virtual addresses will not depend on the size of RAM alone. It will be met by the sum of RAM and the page file. Therefore in a machine with small RAM, the extra amount represented by page file will need to be larger — not smaller — than that needed in a machine with big RAM. Unfortunately the default settings for system management of the file have not caught up with this: it will assign an initial amount that may be quite excessive for a large machine, while at the same time leaving too little for contingencies on a small one.

How big a file will turn out to be needed depends very much on your work-load. Simple word processing and e-mail may need very little — large graphics and movie making may need a great deal. For a general workload, with only small dumps provided for (see note to ‘Should the file be left on Drive C:?’ above), it is suggested that a sensible start point for the initial size would be the greater of (a) 100 MB or (b) enough to bring RAM plus file to about 500 MB. EXAMPLE: Set the Initial page file size to 400 MB on a computer with 128 MB RAM; 250 MB on a 256 MB computer; or 100 MB for larger sizes.

But have a high Maximum size — 700 or 800 MB or even more if there is plenty of disk space. Having this high will do no harm. Then if you find the actual pagefile.sys gets larger (as seen in Explorer), adjust the initial size up accordingly. Such a need for more than a minimal initial page file is the best indicator of benefit from adding RAM: if an initial size set, for a trial, at 50 MB never grows, then more RAM will do nothing for the machine's performance.

Bill James MS MVP has a convenient tool, ‘WinXP-2K_Pagefile’, for monitoring the actual usage of the Page file, which can be downloaded here. A compiled Visual Basic version is available from Doug Knox's site which may be more convenient for some users. The value seen for ‘Peak Usage’ over several days makes a good guide for setting the Initial size economically.

Note that these aspects of Windows XP have changed significantly from earlier Windows NT versions, and practices that have been common there may no longer be appropriate. Also, the ‘PF Usage’ (Page File in Use) measurement in Task Manager | Performance includes those potential uses by pages that have not been taken up. It makes a good indicator of the adequacy of the ‘Maximum’ size setting, but not for the ‘Initial’ one, let alone for any need for more RAM.

Should the drive have a big cluster size?

While there are reports that in Windows 95 higher performance can be obtained by having the swap file on a drive with 32K clusters, in Windows XP the best performance is obtained with 4K ones — the normal size in NTFS and in FAT32 partitions smaller than 8 GB. This then matches the size of the page the processor uses in RAM to the size of the clusters, so that transfers may be made direct from file to RAM without any need for intermediate buffering.

What about Fast User Switching then?

If you use Fast User Switching, there are special considerations. When a user is not active, there will need to be space available in the page file to ‘roll out’ his or her work: therefore, the page file will need to be larger. Only experiment in a real situation will establish how big, but a start point might be an initial size equal to half the size of RAM for each user logged in.

Problems with Virtual Memory

It may sometimes happen that the system gives ‘out of memory’ messages on trying to load a program, or a message about Virtual memory space being low. Possible causes of this are:

• The setting for Maximum Size of the page file is too low, or there is not enough disk space free to expand it to that size.

• The page file has become corrupt, possibly at a bad shutdown. In the Virtual Memory settings, set to “No page file,” then exit System Properties, shut down the machine, and reboot. Delete PAGEFILE.SYS (on each drive, if more than just C:), set the page file up again and reboot to bring it into use.

• The page file has been put on a different drive without leaving a minimal amount on C:.

• There is trouble with third party software. In particular, if the message happens at shutdown, suspect a problem with Symantec’s Norton Live update, for which there is a fix posted here. It is also reported that spurious messages can arise if NAV 2004 is installed. If the problem happens at boot and the machine has an Intel chipset, the message may be caused by an early version (before version 2.1) of Intel’s “Application Accelerator.” Uninstall this and then get an up-to-date version from Intel’s site.

• Another problem involving Norton Antivirus was recently discovered by MS-MVP Ron Martell. However, it only applies to computers where the pagefile has been manually resized to larger than the default setting of 1.5 times RAM — a practice we discourage. On such machines, NAV 2004 and Norton Antivirus Corporate 9.0 can cause your computer to revert to the default settings on the next reboot, rather than retain your manually configured settings. (Though this is probably an improvement on memory management, it can be maddening if you don’t know why it is happening.) Symantec has published separate repair instructions for computers with NAV 2004 and NAV Corporate 9.0 installed. [Added by JAE 2/21/06.]

• Possibly there is trouble with the drivers for IDE hard disks; in Device Manager, remove the IDE ATA/ATAPI controllers (main controller) and reboot for Plug and Play to start over.

• With an NTFS file system, the permissions for the page file’s drive’s root directory must give “Full Control” to SYSTEM. If not, there is likely to be a message at boot that the system is “unable to create a page file.”

http://www.rojakpot.com/showarticle.aspx?artno=143

Virtual Memory Optimization Guide Rev. 4.1

Virtual Memory

Back in the 'good old days' of command prompts and 1.2MB floppy disks, programs needed very little RAM to run because the main (and almost universal) operating system was Microsoft DOS and its memory footprint was small. That was truly fortunate because RAM at that time was horrendously expensive. Although it may seem ludicrous, 4MB of RAM was considered then to be an incredible amount of memory.

However when Windows became more and more popular, 4MB was just not enough. Due to its GUI (Graphical User Interface), it had a larger memory footprint than DOS. Thus, more RAM was needed.

Unfortunately, RAM prices did not decrease as fast as RAM requirements had increased. This meant that Windows users had to either fork out a fortune for more RAM or run only simple programs. Neither was an attractive option. An alternative method was needed to alleviate this problem.

The solution they came up with was to use some space on the hard disk as extra RAM. Although the hard disk is much slower than RAM, it is also much cheaper and users always have a lot more hard disk space than RAM. So, Windows was designed to create this pseudo-RAM or, in Microsoft's terms, Virtual Memory, to make up for the shortfall in RAM when running memory-intensive programs.



How Does It Work?

Virtual memory is created using a special file called a swapfile or paging file.

Whenever the operating system has enough memory, it doesn't usually use virtual memory. But if it runs out of memory, the operating system will page out the least recently used data in the memory to the swapfile in the hard disk. This frees up some memory for your applications. The operating system will continuously do this as more and more data is loaded into the RAM.

However, when any data stored in the swapfile is needed, it is swapped with the least recently used data in the memory. This allows the swapfile to behave like RAM although programs cannot run directly off it. You will also note that because the operating system cannot directly run programs off the swapfile, some programs may not run even with a large swapfile if you have too little RAM.
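To illustrate the "least recently used" idea described here, the following sketch simulates a tiny memory of three page frames and counts how often a requested page has to be brought in from the (simulated) swapfile. It is a toy model of the replacement policy only, not of Windows itself; the reference string and frame count are invented for the example.

    #include <stdio.h>

    #define FRAMES 3     /* pretend RAM holds only 3 pages */
    #define NREFS 12

    int main(void)
    {
        /* A made-up sequence of page references by a running program. */
        int refs[NREFS] = { 1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 4 };

        int frame_page[FRAMES];   /* which page sits in each frame (-1 = empty) */
        int last_used[FRAMES];    /* "time" of the most recent use of each frame */
        int faults = 0;

        for (int f = 0; f < FRAMES; f++) { frame_page[f] = -1; last_used[f] = -1; }

        for (int t = 0; t < NREFS; t++) {
            int page = refs[t], hit = -1, victim = 0;

            for (int f = 0; f < FRAMES; f++)
                if (frame_page[f] == page) hit = f;

            if (hit >= 0) {
                last_used[hit] = t;               /* page already "in RAM" */
                continue;
            }

            /* Page fault: choose the least recently used frame and
               "page it out to the swapfile" to make room. */
            for (int f = 1; f < FRAMES; f++)
                if (last_used[f] < last_used[victim]) victim = f;

            if (frame_page[victim] >= 0)
                printf("t=%2d: page %d brought in, page %d paged out\n",
                       t, page, frame_page[victim]);
            else
                printf("t=%2d: page %d brought in to an empty frame\n", t, page);

            frame_page[victim] = page;
            last_used[victim] = t;
            faults++;
        }
        printf("%d page faults for %d references with %d frames\n",
               faults, NREFS, FRAMES);
        return 0;
    }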