cygrunsrv + sshd + rsync = 20 times too slow -- throttled?

Ken Brown
Thu Sep 2 13:01:02 GMT 2021

On 9/2/2021 4:17 AM, Corinna Vinschen wrote:
> On Sep  1 19:02, Ken Brown wrote:
>> On 9/1/2021 9:52 AM, Corinna Vinschen wrote:
>>> Great idea that.  What we need would be some semaphore upside down.
>>> One that can be used to count items and which is signalled if it's
>>> down to zero.
>> Here's an idea (untested):
>> We create an ordinary Windows semaphore and use it to count the readers: it
>> starts at 0, we increment it by calling ReleaseSemaphore when a reader is
>> opened (by fhandler_pipe::create, fork/exec, dup), and we decrement it by
>> calling WFSO (WaitForSingleObject) when a reader closes.  After
>> decrementing, we test whether the count has dropped to 0 by calling
>> ReleaseSemaphore and checking its lpPreviousCount argument.
>> We also create an event that we can use to make WFMO return during a
>> blocking write.  We signal this event if a reader closes and we've
>> discovered that there are no more readers.  In this case we cancel the
>> pending write [*] and return an error.
>> I'm sure I've overlooked something, but does this seem feasible?
> It could work, but the problem with all these approaches is that they
> are tricky and bound to fail as soon as a process is killed or crashes.
>> [*] I don't know offhand if Windows provides a way to cancel a pending
>> write. If not, we could use query_hdl to drain the pipe.
> There's a CancelIoEx function to cancel all async IO on a handle.
> In a lucid moment tonight, I had another idea.
> First of all, scratch my patch.  Also, revert select to check only
> for WriteQuotaAvailable.
> Next, for sanity, let's assume that non-blocking reads don't change
> WriteQuotaAvailable.  So the only important case is the blocking read,
> which reduces WriteQuotaAvailable by the number of requested bytes.
> Next, fact is, we're only interested in WriteQuotaAvailable > 0.
> And we have a buffersize of 64K.
> We can also safely assume that we only have a very small number of
> readers, typically only one.
> So here's the crazily simple idea:
> What if the readers never request more than, say, 50 or even 25% of the
> available buffer space?  Our buffer is 64K and there's no guarantee that
> any read > PIPE_BUF (== 4K) is atomic anyway.  This can work without
> having to check the other side of the pipe.  Something like this,
> ignoring border cases:
> pipe::create()
> {
>     [...]
>     mutex = CreateMutex();
> }
> pipe::raw_read(char *buf, size_t num_requested)
> {
>   if (blocking)
>     {
>       WFSO(mutex);
>       NtQueryInformationFile(FilePipeLocalInformation);
>       if (!fpli.ReadDataAvailable
>           && num_requested > fpli.InboundQuota / 4)
>         num_requested = fpli.InboundQuota / 4;
>       NtReadFile(pipe, buf, num_requested);
>       ReleaseMutex(mutex);
>     }
> }
> It's not entirely foolproof, but it should fix 99% of the cases.

I like it!

Do you think there's anything we can or should do to avoid a deadlock in the 
rare cases where this fails?  The only thing I can think of immediately is to 
always impose a timeout if select is called with infinite timeout on the write 
side of a pipe, after which we report that the pipe is write ready.  After all, 
we've lived since 2008 with a bug that caused select to *always* report write ready.

Alternatively, we could just wait and see if there's an actual use case in which 
someone encounters a deadlock.


More information about the Cygwin-developers mailing list