Links and Information Page

This page, I hope, will provide you with some useful information, kool links, and files.

All information you see is free to use.

Beverly Hills Software: one of the premier sites on the Web for 32-bit software and information for MS Windows 95 & NT 4.0, now available for download.


Windows NT Tips & Tricks

1999: Efficient NTFS Partitions

How you set up and use an NTFS partition can have a great effect on performance. Two new but differently configured partitions can yield drastically different performance, and response time can degrade over time even if you keep the partition defragmented. Here are the main factors and what you can do about them.

The Partition Itself

Partitions should be created as NTFS, not converted from FAT. On a newly formatted NTFS partition, the Master File Table (MFT) is created at the beginning of the disk, and about 12.5% of the disk is reserved for it. But when you convert a FAT partition to NTFS, the MFT is placed wherever there is free space, and it almost always ends up badly fragmented. See the 5 January 1998 article "MFT Fragmentation" for details.
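
If you're not sure whether an existing partition is still FAT, you can check before deciding to back up and reformat rather than convert in place. Here's a minimal Python sketch using the Win32 GetVolumeInformation call (Windows only; the drive letter is just an example):

    # Report the filesystem of a volume, so FAT partitions can be
    # flagged for a backup-and-reformat instead of an in-place convert.
    import ctypes

    def filesystem_name(root=u"D:\\"):
        buf = ctypes.create_unicode_buffer(32)
        ok = ctypes.windll.kernel32.GetVolumeInformationW(
            root, None, 0, None, None, None, buf, 32)
        return buf.value if ok else None

    print(filesystem_name(u"D:\\"))  # e.g. "NTFS" or "FAT"
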
Large partitions should be avoided. They take a lot longer to back up and restore, data corruption becomes more likely because so much more writing is going on, and access to the disk becomes slower (it takes longer to find, read, and write files). Of course, there are valid reasons for very large partitions: if you have a 5 GB database or work with large video files, you'll need them. Just don't make partitions big if you can avoid it; one to two GB is about right.
It's also a good idea to have specialized partitions: System, Applications, Data, Temp, etc. This will increase safety and performance. See the 8 July 1997 article "Configuring Your Partitions" for details. Note: that article recommended FAT for the system partition; you may find that NTFS is better, especially if you are security conscious, but also because of NTFS's self-repair capabilities.

Directories

It's nice to have deep, multi-branched directory trees; I like the logical organization, keeping separate types of files neatly sorted. However, deep trees can really slow things down, and the sequence in which you create directories can make a big difference. Fortunately, it's easy to clean up. Here are the details:
Under NTFS, each directory is a file just like any other, but with a special internal structure called a "B+ tree". It's fairly complicated, but for our purposes it's enough to say that it is a very good structure for a directory, but weak on handling changes: the more changes you make, the more complicated it gets internally, and the longer it takes to locate your file. Since files are listed in the directory file alphabetically by name, adding new files (or directories) can require changes in the middle of the tree structure. Many such changes make the structure quite complex, and more complexity means less speed.
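
NTFS's actual B+ tree code is out of reach here, but a toy sorted list makes the point about mid-structure insertions. This Python sketch is only an analogy, not NTFS internals:

    # Toy illustration: directory entries are kept sorted by name, so a
    # new name often lands in the middle rather than at the end, which
    # is what makes heavily changed directories complex.
    import bisect

    entries = ["budget.xls", "notes.txt", "report.doc"]
    bisect.insort(entries, "memo.txt")  # lands between existing entries
    print(entries)  # ['budget.xls', 'memo.txt', 'notes.txt', 'report.doc']
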
Files are located by searching through the directories. If you are looking for a file in a tree that is ten levels deep, you have to locate ten directories before you get to the one that points to the file itself. That takes a lot longer than locating a file that is only three levels deep. Plus, if the directories have been changed a lot so that their internal structure has become complex, finding files can become very slow.
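
If you want to feel the depth cost yourself, time repeated lookups through a deep path versus a shallow one. A rough Python sketch, assuming both example files exist (OS caching will blur the numbers):

    import os
    import time

    def time_stat(path, n=10000):
        # Resolve the same path n times and return the elapsed seconds.
        start = time.perf_counter()
        for _ in range(n):
            os.stat(path)
        return time.perf_counter() - start

    print(time_stat(r"D:\a\b\c\d\e\f\g\h\i\j\file.txt"))  # ten levels deep
    print(time_stat(r"D:\a\b\c\file.txt"))                # three levels deep
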
Directories tend to grow, but rarely shrink. Sometimes when you add a new file or directory, it can be fitted into the space left by a deleted one, but often it uses new space. The directory grows and can fragment, slowing down access even more.
Long file names can cause directories and the MFT to fragment. The way file names are stored, each character requires two bytes. For computer efficiency, the DOS 8.3 format is best; for human efficiency, 20 to 30 character names are much better. Of course, there are exceptions, such as files on a CD-ROM or an archive partition where they won't be rewritten, but in general, don't go over thirty characters.
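
The arithmetic is simple. Using the two-bytes-per-character storage described above (and ignoring the DOS 8.3 alias NT also keeps for long names), a quick Python sketch:

    def name_bytes(filename):
        # Each character of the stored name takes two bytes.
        return 2 * len(filename)

    print(name_bytes("REPORT.TXT"))                       # 20 bytes
    print(name_bytes("Quarterly Sales Report 1998.doc"))  # 62 bytes
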
Diskeeper 4.0 can defragment directories, which helps a lot, but it will not reduce the internal complexity. To clean that up and restore the directory to its initial state, copy the directory (keeping the copy under the same parent directory as the original, of course), giving it a new name; then delete the original and rename the copy to the original name. This should be done periodically (once or twice a year?) if you frequently create and delete files, or whenever you delete a large number of files from a single directory. Since this changes the location of the directory file, it's a good idea to make a list of all of the directories you want to clean up and do them all at once, then use Diskeeper to do a boot-time consolidation afterwards. This will move the directories together and defragment them.
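
Here is a minimal Python sketch of that copy/delete/rename sequence. The path is a made-up example, and since this deletes the original tree, test it on scratch data and back up first:

    import os
    import shutil

    def rebuild_directory(path):
        # Copy the tree under the same parent, drop the original, and
        # rename the copy back; NTFS builds a fresh directory structure.
        temp = path + ".rebuild"
        shutil.copytree(path, temp)
        shutil.rmtree(path)
        os.rename(temp, path)

    rebuild_directory(r"D:\Data\Projects")
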

Compression

The value of compression has, in my opinion, mostly disappeared since hard disk prices crashed. It's fine for archives and such, where fragmentation and performance issues aren't very important, but for your active partitions, it can really slow you down.
When a file is compressed, it is compressed in units of 16 clusters. For each unit, the MFT record contains the Logical Cluster Number (LCN) and the number of clusters actually used, plus an entry containing an LCN of -1 and the number of clusters the preceding entry needs when decompressed. What you have is, in effect, a file fragmented into 8-cluster fragments (on average)! If the file is large enough, there will be too many compressed units to record in one MFT record, so one or more additional MFT records will have to be used. If you compress an entire partition with a large number of files on it, the MFT may fill its pre-allocated space and overflow in fragments into the rest of the disk.
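
A little arithmetic shows how fast those 16-cluster units add up. This Python sketch just applies the numbers above; it doesn't inspect a real MFT:

    def compression_fragments(file_bytes, cluster_bytes=4096):
        # Round up to whole clusters, then to whole 16-cluster units;
        # each unit behaves like a separate fragment.
        clusters = -(-file_bytes // cluster_bytes)
        return -(-clusters // 16)

    # A 271 MB file with 4 KB clusters:
    print(compression_fragments(271 * 1024 * 1024))  # 4336 fragment-like units
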
When you decompress a file, each unit is decompressed and written to the disk; they may or may not be written contiguously. But the extended MFT entries allocated during compression will still be in use by that file. You can copy a single, formerly compressed file off the partition to another partition, delete the original, defragment the partition, and copy the file back. That will reverse most of the compression/decompression side effects for that file, but excess MFT records will remain in the MFT, serving no purpose. (In a test done at Executive Software, compressing a 271 MB file created 467 excess MFT records!) The only way to completely remove all of the side effects of compression is to back up or copy all of the data off the partition, reformat, and restore the data.
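
For a single file, that copy-off/copy-back cleanup can be scripted. A Python sketch with placeholder paths; run your defragmenter where the comment indicates:

    import os
    import shutil

    def refresh_file(src, scratch_dir):
        # Copy the formerly compressed file to another partition,
        # delete the original, then copy it back so it is written fresh.
        temp = os.path.join(scratch_dir, os.path.basename(src))
        shutil.copy2(src, temp)
        os.remove(src)
        # ... defragment the source partition here ...
        shutil.copy2(temp, src)
        os.remove(temp)

    refresh_file(r"D:\Archive\big_database.mdb", r"E:\Scratch")
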
This can be simplified for annual maintenance. The procedure moves your partitions to different physical locations, which doesn't matter except for the boot partition: do not use this method for the boot partition! If you create all of your partitions the same size, you can start by reformatting your Temp partition and copying one of the other partitions to it. Then reformat the partition you just copied, and copy another partition onto it. Continue in this manner until you have done all of your partitions, then change the partition letters and names so the data is on the correct partitions. Reboot and you're done.
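
To keep the order straight, it helps to write the rotation down before touching anything. This Python sketch only prints a plan; the drive letters are examples, and nothing is actually formatted:

    def rotation_plan(partitions, temp):
        # Each partition is copied into the freshly formatted one
        # before it, starting with the Temp partition.
        targets = [temp] + partitions
        print("format %s:" % temp)
        for src, dst in zip(partitions, targets):
            print("copy %s: to %s:, then format %s:" % (src, dst, src))
        print("reassign drive letters, then reboot")

    rotation_plan(["D", "E", "F"], temp="T")
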

Cluster Size

In the 20 October 1997 article "Cluster Sizes", I described the pros and cons of NTFS cluster sizes. New data regarding the MFT and its internal functions leads me to recommend 4,096 bytes (4 KB) as the best cluster size, especially if you will have a very large number of files or will be using compression. Never use a cluster smaller than 1,024 bytes, as that allows MFT records to fragment, and never exceed 4,096 bytes, as compression and Diskeeper will not work with larger clusters.
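
You can check a volume's current cluster size with the Win32 GetDiskFreeSpace call. A minimal Python sketch (Windows only; the drive letter is an example):

    import ctypes

    def cluster_size(root=u"C:\\"):
        # Cluster size = sectors per cluster * bytes per sector.
        spc = ctypes.c_ulong(0)
        bps = ctypes.c_ulong(0)
        free = ctypes.c_ulong(0)
        total = ctypes.c_ulong(0)
        ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
            root, ctypes.byref(spc), ctypes.byref(bps),
            ctypes.byref(free), ctypes.byref(total))
        return spc.value * bps.value if ok else None

    print(cluster_size(u"C:\\"))  # e.g. 4096
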

******************************************************************

-Tip 2- -Tip 3- -Tip 4-

©1997-99 System Wide Resources, Inc.