copying a million tiny files?
Thu Nov 1 06:57:00 GMT 2007
From what Gary mentions, indeed rsync is the best way to go.
At least, I think, for routine, repeated backups.
With rsync, only the first time is slow.
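As a rough sketch of that approach (the host name and paths here are made-up examples, not anything from the thread), a repeated backup with rsync might look like:

```shell
# Sketch only: 'backuphost' and both paths are hypothetical.
# -a preserves permissions, timestamps, and symlinks; --delete mirrors
# removals. On later runs rsync skips files whose size and mtime are
# unchanged, which is why only the first transfer is slow.
rsync -a --delete /data/millions-of-files/ backuphost:/backup/millions-of-files/
```

The per-file stat overhead still applies locally, but unchanged files are never sent over the link.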
For one-shot backups of many files, using tar to group them into one archive and
then sending that is a good idea.
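A minimal sketch of the tar-and-stream idea (the directories below are illustrative; over a network you would pipe into ssh instead of a local tar):

```shell
# Sketch: bundle many small files into a single tar stream so the
# transfer is one sequential read/write instead of per-file operations.
mkdir -p /tmp/tar_demo/src /tmp/tar_demo/dst
for i in 1 2 3; do echo "data $i" > "/tmp/tar_demo/src/file$i"; done

# Stream the whole tree as one archive and unpack it at the destination.
# For a remote copy the receiving side would be something like:
#   tar cf - -C /tmp/tar_demo/src . | ssh backuphost 'tar xf - -C /backup'
tar cf - -C /tmp/tar_demo/src . | tar xf - -C /tmp/tar_demo/dst
```

This avoids the per-file round trips that make a million tiny files so slow to copy individually.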
Using xcopy is kind of silly and won't get you compatibility, especially in scripts.
Gary R. Van Sickle <firstname.lastname@example.org> wrote:
> > From: Brian Dessent
> > sam reckoner wrote:
> > > I'm not exaggerating. I have over one million small files
> > that I'd like to
> > > move between disks. The problem is that even getting a directory
> > > listing takes forever.
> > >
> > > Is there a best practice for this?
> > I know it's heresy but if you just want to copy files why not
> > use the native XCOPY? It will not suffer the performance
> > degradation of having to emulate the POSIX stat semantics on
> > every file, just like the native DIR command in a large
> > directory does not take ages because it simply uses
> > FindFirstFile/FindNextFile which are fairly efficient (but do
> > not provide sufficient information to emulate POSIX.)
> > Brian
> I have a similar situation to the OP (copying many thousands of small files
> over a fairly slow link), and actually timed using XCOPY vs. Cygwin methods
> (cp in my case). It didn't make a significant difference. Ultimately what
> I think you run into in these sorts of situations is that you bump up
> against the slowness of the link (or physical disk) because, POSIX emulation
> or not, all your caches do is thrash.
> Gary R. Van Sickle
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple
Problem reports: http://cygwin.com/problems.html