slow handling of large sets of files?
Larry Hall
lh-no-personal-replies-please@cygwin.com
Tue Feb 1 19:57:00 GMT 2005
At 02:28 PM 2/1/2005, you wrote:
>I too have been seeing a problem with very slow file access in large
>directories.
>
>Specifically,
>
>On a Cygwin/Win2k box, I have a mirror of an FTP site. The site has 2.5
>million files spread across 100 directories
>(20,000 - 30,000 files per directory). I have previously run this number
>of files on an NT4 NTFS filesystem without significant performance
>problems.
>
>On this mirror, operations like these are __VERY__ slow:
>ls ${some_dir}
>ls ${some_dir}/${some_path}
>cp ${some_file} ${some_path}
>cp -R ${some_path_with_only_a_few_files} ${some_path}
>
>
>If I look at the performance monitor, I can see a queue depth of 1-2 and
>300-500 disk reads per second. (That's real. It's a fast array) The
>reads appear to be single-block reads, as the throughput during these
>events is 1.5 - 3MB/sec.
>
>I am beginning to think the disk activity relates to NTFS permission
>checking, which can be complex under Win2k.
>
>I don't know how to debug or tune this.
>
>Any ideas?
Have you looked at the '-x', '-X', and '-E' flags of mount?
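To illustrate the suggestion above: running `mount` with no arguments lists the current mount points and their flags. A sketch of how one might apply those flags, assuming the Cygwin 1.5-era `mount` syntax and semantics ('-x' marks all files executable, '-X' marks them Cygwin executables, '-E' marks them non-executable, which avoids per-file executability probing on `ls`); the `/ftp-mirror` and `D:/ftp-mirror` paths are hypothetical placeholders for the actual mirror location:

```shell
# List current mount points and their flags to see how the
# mirror's directory tree is currently mounted.
mount

# Remount the mirror with -E (--no-executable) so Cygwin no
# longer needs to examine each file to decide whether it is
# executable -- a plausible source of the many single-block
# reads seen during `ls` on these large directories.
# -f forces the mount over an existing entry; -b selects binmode.
umount /ftp-mirror
mount -f -b -E "D:/ftp-mirror" /ftp-mirror
```

Whether '-E' is appropriate depends on the contents: for a data-only FTP mirror it should be safe, but scripts or binaries under that tree would then need to be invoked explicitly (e.g. via `sh script.sh`) rather than relying on Cygwin detecting them as executables.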
--
Larry Hall http://www.rfk.com
RFK Partners, Inc. (508) 893-9779 - RFK Office
838 Washington Street (508) 893-9889 - FAX
Holliston, MA 01746
--
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple
Problem reports: http://cygwin.com/problems.html
Documentation: http://cygwin.com/docs.html
FAQ: http://cygwin.com/faq/