This is the mail archive of the mailing list for the Cygwin project.
Re: [Patch] Fixing the PROCESS_DUP_HANDLE security hole.
Christopher Faylor wrote:
> On Sun, Nov 14, 2004 at 12:11:58AM -0500, Christopher Faylor wrote:
> >When I get the code to a point that it can run configure, I'll do a
> >benchmark and see how bad this technique is. If there is not a
> >noticeable degradation, I think I'll probably duplicate the scenario of
> >last year and check in this revamp which, I believe, will eliminate
> >the security problem that you were talking about.
> Well, my initial implementation was a little slower than 1.5.12, which
> was encouraging since there was still stuff that I could do to speed
> things up. For instance, it occurred to me that all of the
> synchronization in spawn_guts and waitsig could go away (with one
> exception), so I got rid of it.
> And, things got a little slower.
> So, I realized that I could get rid of a thread synchronization problem
> by always putting the pinfo for the new process in a static array.
> This also eliminated the need to do anything other than send a signal
> when the child was stopped or terminated.
> I was getting pretty excited about all of the code that was disappearing
> until I ran the benchmark.
> Yep. Things are *a lot* slower now.
> Time for bed and dreams about threads and processes...
I hope you had a great idea!
FWIW, more than 64 processes can also be supported in the current framework
by starting one wait_subproc thread per 63 subprocesses.
The threads can probably all share the same events array (with the
event copied (or duplicated?) at slots 64, 128, ...), each passing a
different segment to WFMO. The threads would be started on demand,
keeping the overhead low in the usual case where a process does not fork.
It doesn't look complicated at all (pass an argument to each thread
to indicate its segment), but it won't be as simple as having
one monitoring thread per subprocess and no reparenting.
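The segment bookkeeping described above can be sketched as pure index
arithmetic. This is only an illustration of the layout being proposed, not
Cygwin code: the constants and helper names below are hypothetical, and it
assumes each 64-slot segment keeps a copy of the shared event in slot 0 and
63 process handles in the remaining slots, matching the WFMO handle limit.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical constants: WaitForMultipleObjects (WFMO) accepts at most
   64 handles, so each wait thread watches one 64-slot segment of the
   shared events array.  Slot 0 of every segment holds the (copied or
   duplicated) signal event, leaving 63 slots for subprocess handles. */
#define WFMO_MAX 64
#define SEGMENT_SIZE WFMO_MAX
#define PROCS_PER_THREAD (SEGMENT_SIZE - 1)

/* Slot in the shared events array for subprocess n (0-based).  The
   shared event sits at 0, 64, 128, ...; process handles fill the rest. */
static size_t
event_slot_for (size_t n)
{
  size_t thread = n / PROCS_PER_THREAD;
  size_t offset = n % PROCS_PER_THREAD;
  return thread * SEGMENT_SIZE + offset + 1;
}

/* Start of the segment a given wait thread passes to WFMO; this is the
   argument each thread would receive to identify its slice. */
static size_t
segment_start (size_t thread)
{
  return thread * SEGMENT_SIZE;
}

/* Wait threads needed for nprocs subprocesses -- started on demand,
   so zero when the process never forks. */
static size_t
threads_needed (size_t nprocs)
{
  return (nprocs + PROCS_PER_THREAD - 1) / PROCS_PER_THREAD;
}
```

With this layout a 64th subprocess (index 63) spills into the second
segment, which is exactly when a second wait thread would be started.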