This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [RFC v3 03/23] sysdeps/wait: Use waitid if available


Arnd Bergmann <arnd@arndb.de> writes:

> On Thu, Jul 25, 2019 at 6:40 AM Rich Felker <dalias@libc.org> wrote:
>> On Wed, Jul 24, 2019 at 05:04:53PM -0700, Alistair Francis wrote:
>> > On Tue, Jul 23, 2019 at 1:45 AM Arnd Bergmann <arnd@arndb.de> wrote:
>> > >
>> > > Sounds good to me, the debate over what rusage to use should not hold
>> > > up the review of the rest of that syscall.
>> >
>> > I'm unclear what the final decision is here. What is the solution:
>> > are we going to have wait4(), or add P_PROCESS_PGID to waitid()?
>> >
>> > Beyond that, what is the solution for current implementations? If
>> > we add wait4() then there isn't an issue (and I can drop this
>> > patch), but if we add P_PROCESS_PGID then we will need a way to
>> > handle kernels with waitid() but no P_PROCESS_PGID. My new plan,
>> > though, is to only use the waitid syscall if we don't have waitpid
>> > or wait4, so it seems like this will only affect RV32 for the time
>> > being.
>>
>> I would really like some indication which solution will be taken,
>> since it impacts choices that will need to be made in musl very soon.
>> My favorite outcome would be bringing back wait4 for rv32 (and
>> no-time32 archs in general) *and* adding P_PROCESS_PGID. In the short
>> term, just using wait4 would be the simplest and cleanest for us (same
>> as all other archs, no extra case to deal with), but in the long term
>> there may be value in having rusage that can represent more than 68
>> cpu-years spent by a process (seems plausible with large numbers of
>> cores).
>
> Based on the feedback from Linus and Eric, the most likely outcome
> at the moment seems to be an extension of waitid() to allow
> P_PGID with id=0 like BSD does, and not bring back wait4() or
> add P_PROCESS_PGID.
>
> So far, I don't think anyone has proposed an actual kernel patch.
> I was hoping that Eric would do it, but I could also send it if he's
> otherwise busy.

So here is what I am looking at.  It still needs to be tested,
and the description needs to be improved so that it properly credits
everyone.  However, I think I have the major strokes correct.

From: "Eric W. Biederman" <ebiederm@xmission.com>
Date: Tue, 23 Jul 2019 07:44:46 -0500
Subject: [PATCH] waitid: Add support for waiting for the current process group

It was recently discovered that the Linux version of waitid is not a
superset of the other wait functions, because it does not include
support for waiting for the current process group.  This has two
downsides: an extra system call is needed to get the current process
group, and a signal could arrive between the system call that
retrieved the process group and the call to waitid, changing the
current process group.

Allow userspace to avoid both of those issues by defining
idtype == P_PGID and id == 0 to mean wait for the caller's process
group at the time of the call.

Arguments can be made for using a different choice of idtype and id
for this case but the BSDs already use this P_PGID and 0 to indicate
waiting for the current process's process group.  So be nice to user
space programmers and don't introduce an unnecessary incompatibility.

Some people have noted that the POSIX description is that
waitpid will wait for the current process group, and that in
the presence of pthreads that process group can change.  To get
clarity on this issue I looked at XNU, FreeBSD, and illumos.  All of
those flavors of Unix wait for the process group that was current at
the time of the call and, as written, could not adapt to the process
group changing after the call.

At one point Linux did adapt to the current process group changing,
but that stopped in 161550d74c07 ("pid: sys_wait... fixes").  It has
been over 11 years since Linux had that behavior, no programs that
fail with the change in behavior have been reported, and I could not
find any other Unix that does this.  So I think it is safe to clarify
the definition of current process group to mean the current process
group at the time of the wait call.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
---
 kernel/exit.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/exit.c b/kernel/exit.c
index a75b6a7f458a..3d86930f035e 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -1577,14 +1577,16 @@ static long kernel_waitid(int which, pid_t upid, struct waitid_info *infop,
 		break;
 	case P_PGID:
 		type = PIDTYPE_PGID;
-		if (upid <= 0)
+		if (upid < 0)
 			return -EINVAL;
+		if (upid == 0)
+			pid = get_pid(task_pgrp(current));
 		break;
 	default:
 		return -EINVAL;
 	}
 
-	if (type < PIDTYPE_MAX)
+	if ((type < PIDTYPE_MAX) && !pid)
 		pid = find_get_pid(upid);
 
 	wo.wo_type	= type;
-- 
2.21.0.dirty

