This is the mail archive of the cygwin mailing list for the Cygwin project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: Running 4096 parallel cat processes on remote share results in only 1018 succeeding

Nathan Fairchild wrote:
When I run a script like so:
cat: /u/pe/env_files/: No such file or directory
./ fork: retry: Resource temporarily unavailable
./ fork: Resource temporarily unavailable
$ grep -l PATH out* | wc -l
1018

I think I'm probably hitting the 256 process limit because of the I/O slowdown the network presents? I don't get this issue when running on (much faster) local disk.
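The script itself isn't quoted above, but the error output suggests a loop of backgrounded cat jobs. A self-contained sketch of that pattern (local paths and a small N here; the poster's real run used 4096 jobs reading from a remote share, and the exact source file name is not given):

```shell
#!/bin/sh
# Sketch of the parallel-cat pattern from the report. Local temp paths
# are used so the example runs anywhere; the real job read the net.
src=$(mktemp)
dir=$(mktemp -d)
printf 'PATH=/usr/bin\n' > "$src"
N=64                                  # the poster launched 4096
for i in $(seq "$N"); do
    cat "$src" > "$dir/out$i.log" &   # one backgrounded cat per iteration
done
wait                                  # let every background cat finish
# Count how many output logs actually got content, as in the report:
count=$(grep -l PATH "$dir"/out*.log | wc -l)
echo "$count of $N logs succeeded"
rm -rf "$src" "$dir"
```

Locally all N jobs should succeed; the reported failure mode (`fork: Resource temporarily unavailable`, only 1018 of 4096 logs populated) appears when the per-file latency of the remote share comes into play.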
Are you only reading from the net, or are you writing
to the net too?  I.e., what is your CWD?  Is out$i.log
on the net or local?  I tried it locally and couldn't
reproduce your symptoms.

Your problem is more likely the server hosting the remote file system.

While you can write files locally that fast, the remote server adds
enough per-file delay (milliseconds each) that it can't keep up.  Even
processing your network requests takes CPU time.

/i/fdlims> cat mrun
ulimit -n 3200
for i in $(seq $1)
do exec cat mrun >/tmp/tmp$i.log&
done
/i/fdlims> ll /tmp/tmp*|wc
   4096   28672  191405
I'm not sure how accurate the ulimit command inside
Cygwin is... it may be accurate; I just don't know.
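For what it's worth, a quick way to see what limits the shell itself believes are in effect (on Cygwin these values are emulated on top of Windows, which is exactly the uncertainty noted above):

```shell
# Query the limits this shell reports; on Cygwin these are emulated,
# so they may not match what Windows actually enforces.
nofile=$(ulimit -n)                        # max open file descriptors
nproc=$(ulimit -u 2>/dev/null || echo unknown)  # max user processes
echo "open files: $nofile, processes: $nproc"
```

The process limit (`ulimit -u`, where the shell supports it) is the one most directly related to `fork: Resource temporarily unavailable`.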

