Using Windows Explorer to copy large numbers of files from one drive to another slows my system down enormously. Is there a way to avoid this?
Microsoft has gradually introduced a number of low-level features to make I/O-intensive processes like copying files less burdensome on Windows systems. Some of them went hand in hand with advances in system hardware -- like when Direct Memory Access was introduced as a standard hard-drive technology -- but some of these features have evolved on their own course.
Most of us know about the concept of process priority ("niceness" in the Unix/Linux world). A process with a lower priority in Windows is handled only when the CPU has nothing else to do, while a process with a higher priority is given a bigger slice of the CPU pie.
Eventually, Microsoft added another kind of priority to Windows: I/O priority. This is more or less what it sounds like: processes with a lower I/O priority must make do with a smaller share of the available I/O in a system, so that more urgent processes don't have to take a back seat.
Process and I/O priority are both useful when dealing with a large copy operation. By default, though, simply copying files from one place to another does not exploit them. You have to craft your copy operation specifically to take advantage of them.
I did this by creating a batch file named fastmove to copy things from Location A to Location B with minimal impact on the system.
The source for fastmove looks like this:
start /low xcopy "%~1" "%~2" /s /j
and is invoked like this:
fastmove [SOURCE] [TARGET]
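For instance, to move a video library onto an external drive (the paths here are purely illustrative), you might run:

fastmove C:\Users\Me\Videos E:\Archive\Videos

The quoting in the batch file ("%~1" and "%~2") means paths containing spaces work too, whether or not you quote them on the command line.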
Let's look at the instructions used in fastmove.
First up is the start command. It is an underappreciated little gem -- start lets you launch another process at a given priority or even CPU affinity. The /low switch here means that the stated program is started with the lowest possible CPU and I/O priority, so that, no matter what it does, any other programs running (and any other actions by the user) will always take precedence.
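To make that concrete, here is a sketch of start used directly, without the batch-file wrapper (the paths are placeholders, and the second variant is an assumption about tuning rather than something the original fastmove does):

:: Launch xcopy at the lowest priority, pinned to the first CPU core
:: (/affinity takes a hex mask; 1 means core 0)
start /low /affinity 1 xcopy C:\data D:\backup /s /j

:: /belownormal is a middle ground if /low makes the copy crawl
start /belownormal xcopy C:\data D:\backup /s /j

Run start /? to see the full list of priority classes and other switches.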
The other half of the magic here is supplied by the xcopy command, which allows for a broader range of options than the regular copy command. One of them, the /j switch, tells the system that the copy should be done using unbuffered I/O. This is handy when copying large files, or large numbers of them, for two reasons:
- It reduces the amount of overhead needed to move the files. Buffered I/O adds processing up front in exchange for speedier retrieval of the same files later on, but, if you're just moving a bunch of files from one place to another, that future payoff never materializes.
- It keeps the system data cache from being overloaded. The buffering process is accomplished by using the same RAM that the system uses for storing other kinds of data that benefit from being cached. If you flood that buffer with I/O from a background copy operation, it slows down everything in the foreground.
Next time you find yourself with many gigabytes of files to be moved, try using fastmove and see how much lighter the load is on the rest of your system as a result.
Also note that xcopy has an even more advanced cousin, robocopy, which sports options that are even more sophisticated. Readers can try robocopy in place of xcopy, especially if they want to use robocopy's network-specific options as a way to ensure files copied across a network are transferred reliably.
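As a starting point, here is a hedged sketch of a robocopy-based fastmove. The switches shown are standard robocopy options, but treat the specific values (retry counts, packet gap) as assumptions to tune for your own setup, and note that robocopy's /j switch for unbuffered I/O is only present in newer Windows releases:

:: /e copies subdirectories, including empty ones
:: /j uses unbuffered I/O (newer robocopy versions only)
:: /z enables restartable mode, so interrupted network copies can resume
:: /r:3 /w:5 retries a failed file 3 times, waiting 5 seconds between tries
:: /ipg:50 inserts a 50 ms inter-packet gap to ease the load on slow links
start /low robocopy "%~1" "%~2" /e /j /z /r:3 /w:5 /ipg:50

The restartable and retry options are what make robocopy attractive over a flaky network: where xcopy would abandon a half-copied file, robocopy can pick up where it left off.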
Do you have questions for our experts? Email firstname.lastname@example.org.
This was first published in February 2013