Bug 17326 - endless loop in __reclaim_stacks
Summary: endless loop in __reclaim_stacks
Status: RESOLVED DUPLICATE of bug 26104
Alias: None
Product: glibc
Classification: Unclassified
Component: nptl
Version: unspecified
Importance: P2 normal
Target Milestone: ---
Assignee: Not yet assigned to anyone
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-08-29 08:06 UTC by ma.jiang
Modified: 2020-06-11 17:58 UTC
CC List: 4 users

See Also:
Host:
Target:
Build:
Last reconfirmed:
Flags: fweimer: security?


Attachments
my fix for the bug (2.23 KB, patch)
2014-08-29 08:06 UTC, ma.jiang
fix for trunk (2.29 KB, patch)
2014-09-05 07:44 UTC, ma.jiang

Description ma.jiang 2014-08-29 08:06:50 UTC
Created attachment 7766 [details]
my fix for the bug

Hi all,
Even after the fix mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=477705, I can still reproduce the bug on a dual-core ARMv7 board.
As the Linux kernel only guarantees per-page atomicity when doing fork, just adding an atomic_write_barrier is not enough to protect the stack_used/stack_cache lists. We need to stop threads that try to modify the lists while a thread is doing fork; only then can the child process get a coherent list and __reclaim_stacks do the right job.
In fork, we already do this for the IO locks (see _IO_list_lock (), _IO_list_resetlock () and _IO_list_unlock () in __libc_fork). I believe we should add similar code to protect the stack_used/stack_cache lists. I have made a patch (see the attachment); is it OK for trunk?
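For clarity, here is a small, self-contained illustration of the pattern I mean, applied to an ordinary application-level lock via pthread_atfork rather than to the internal stack_used/stack_cache lists (the attached patch is the actual proposal; the function names below are just for this example):

/* Illustration only: the lock-around-fork pattern that
   _IO_list_lock/_IO_list_unlock/_IO_list_resetlock implement for the
   FILE list, shown here with pthread_atfork.  The real proposal
   (attached patch) applies the same idea to the internal
   stack_used/stack_cache lists inside __libc_fork, where applications
   cannot reach the lock themselves.  Compile with: gcc -pthread  */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* prepare: quiesce list writers before the address space is copied,
   so no list update can be caught halfway across a page boundary.  */
static void prepare_lock_list (void)  { pthread_mutex_lock (&list_lock); }

/* parent: writers may continue as before.  */
static void parent_unlock_list (void) { pthread_mutex_unlock (&list_lock); }

/* child: the forking thread took the lock in the prepare handler and is
   the only thread that survives in the child, so releasing the lock is
   enough here.  (__libc_fork's internal version reinitializes its locks
   instead, via _IO_list_resetlock.)  */
static void child_unlock_list (void)  { pthread_mutex_unlock (&list_lock); }

int
main (void)
{
  pthread_atfork (prepare_lock_list, parent_unlock_list, child_unlock_list);

  pid_t pid = fork ();
  if (pid == 0)
    {
      /* The child sees the list in a consistent state because no
         writer could be mid-update while fork copied the pages.  */
      puts ("child: list is consistent");
      _exit (0);
    }
  waitpid (pid, NULL, 0);
  return 0;
}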
Comment 1 ma.jiang 2014-09-05 07:44:16 UTC
Created attachment 7771 [details]
fix for trunk
Comment 2 Carlos O'Donell 2020-06-11 17:58:08 UTC
I'm closing this in favour of bug 26104, which contains my analysis. We should discuss the issue in bug 26104. I don't think that adding additional locking is the right solution.
Comment 3 Carlos O'Donell 2020-06-11 17:58:39 UTC
Marking as duplicate of 26104.

*** This bug has been marked as a duplicate of bug 26104 ***