This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH v2] epoll: Support for disabling items, and a self-test app.
- From: Paolo Bonzini <pbonzini at redhat dot com>
- To: Paul Holland <pholland at adobe dot com>
- Cc: Andy Lutomirski <luto at amacapital dot net>, Andrew Morton <akpm at linux-foundation dot org>, "mtk dot manpages at gmail dot com" <mtk dot manpages at gmail dot com>, Paton Lewis <palewis at adobe dot com>, Alexander Viro <viro at zeniv dot linux dot org dot uk>, Jason Baron <jbaron at redhat dot com>, "linux-fsdevel at vger dot kernel dot org" <linux-fsdevel at vger dot kernel dot org>, "linux-kernel at vger dot kernel dot org" <linux-kernel at vger dot kernel dot org>, Davide Libenzi <davidel at xmailserver dot org>, "libc-alpha at sourceware dot org" <libc-alpha at sourceware dot org>, Linux API <linux-api at vger dot kernel dot org>, "paulmck at linux dot vnet dot ibm dot com" <paulmck at linux dot vnet dot ibm dot com>
- Date: Fri, 19 Oct 2012 15:39:03 +0200
- Subject: Re: [PATCH v2] epoll: Support for disabling items, and a self-test app.
- References: <CCA6A06A.10264%pholland@adobe.com>
On 19/10/2012 15:29, Paul Holland wrote:
> A disadvantage of solutions in this direction, which was not present in
> Paton's patch, is that all calls to epoll_wait would need to specify some
> timeout value (!= -1) to guarantee that they each come out of epoll_wait
> and execute the "pass the buck" or "grace_period" logic. So you would
> then have contention between designs that want highly responsive "delete"
> operations (those would require very short timeout values to epoll_wait)
> and those that want low execution overhead (those would want larger
> timeout values).
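For concreteness, the loop described above might look roughly like the sketch below; pending_deletes, reap_pending_deletes() and handle_event() are made-up names standing in for whatever handshake the application actually uses:

#include <sys/epoll.h>

#define MAX_EVENTS 64

extern int  pending_deletes;             /* set by a thread that wants an item gone (hypothetical) */
extern void reap_pending_deletes(void);  /* "pass the buck" / grace-period logic (hypothetical)    */
extern void handle_event(struct epoll_event *ev);

static void worker_loop(int epfd, int timeout_ms)
{
    struct epoll_event events[MAX_EVENTS];

    for (;;) {
        /* timeout_ms == -1 would block until an event arrives, so the
         * cleanup below might never run; a small timeout keeps deletes
         * responsive but wakes every idle worker again and again. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, timeout_ms);
        if (n < 0)
            continue;                    /* e.g. EINTR */

        for (int i = 0; i < n; i++)
            handle_event(&events[i]);

        if (pending_deletes)
            reap_pending_deletes();
    }
}

Whatever value timeout_ms takes only shifts the balance between delete latency and idle wakeups; it does not remove the tension.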
Is this really a problem? If your thread pool risks getting oversized,
you might need some kind of timeout anyway to expire threads. If your
thread pool is busy, the timeout will never be reached.
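To make that concrete: if the pool already wakes up periodically to decide whether an idle worker should exit, the same wakeup can run the grace-period check at no extra cost. A small, hypothetical extension of the sketch above (idle_for_too_long() is likewise a made-up policy hook):

extern void reap_pending_deletes(void);   /* grace-period logic, as above (hypothetical) */
extern int  idle_for_too_long(void);      /* pool-sizing policy (hypothetical)           */

/* Called when epoll_wait() returns 0, i.e. the timeout expired with no
 * events pending; returns nonzero if this worker thread should exit. */
static int on_epoll_timeout(void)
{
    reap_pending_deletes();              /* deletes stay responsive        */
    return idle_for_too_long();          /* retire an idle worker, if any  */
}

When the pool is busy, epoll_wait() keeps returning events instead of 0, so this path, and the extra wakeups it implies, never triggers.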
I'm not against EPOLL_CTL_DISABLE, just couldn't resist replying to "The
optimal data structure to do this without killing scalability is not
obvious". :)
Paolo