Bug 5990

Summary: running out of file descriptors when linking >1024 files
Product: binutils       Reporter: Christopher Faylor <me>
Component: gold         Assignee: Ian Lance Taylor <ian>
Status: RESOLVED FIXED
Severity: normal        CC: bug-binutils, simon.hausmann
Priority: P2
Version: unspecified
Target Milestone: ---
Host:                   Target:
Build:                  Last reconfirmed:

Description Christopher Faylor 2008-03-27 17:51:07 UTC
When trying to link a binary which takes a large number of files, I get many
errors like this:

ld-new: cannot open ../driver/ic/via/sinai/mlxhh/obj/1/mcgm.o: Too many open files

This is coming from Read_symbols::do_read_symbols.

I can obviously work around the problem by increasing the number of allowed open
file descriptors.
Comment 1 Nick Clifton 2008-03-28 07:26:18 UTC
Subject: Re: New: running out of file descriptors when linking >1024 files

Hi Chris,

> When trying to link a binary which takes a large number of files, I get many
> errors like this:
> 
> ld-new: cannot open ../driver/ic/via/sinai/mlxhh/obj/1/mcgm.o: Too many open files
> 
> This is coming from Read_symbols::do_read_symbols.
> 
> I can obviously work around the problem by increasing the number of allowed open
> file descriptors.

Or you could place the object files into a library.  Or a thin archive...

Cheers
   Nick


Comment 2 Christopher Faylor 2008-05-09 14:39:29 UTC
An archive does solve the problem, but I don't think a thin archive would, would
it?  That still has to open each individual file, doesn't it?
Comment 3 Ian Lance Taylor 2008-07-25 04:28:14 UTC
I finally got back to this and committed a patch.  Let me know if you have any
trouble with it.
Comment 4 Simon Hausmann 2008-07-25 10:35:34 UTC
I still see the problem, though the error message is different:

ld: out of file descriptors and couldn't close any
Comment 5 Ian Lance Taylor 2008-07-28 03:32:14 UTC
What does "ulimit -n" show on your system?
Comment 6 Ian Lance Taylor 2008-08-02 20:07:20 UTC
You reopened this bug report.  Can you provide any more information about the
failing case?  As it is, I don't know how to address the problem.  I've tested
it by reducing the ulimit on my machine, and it worked as I expected when 10
file descriptors were available.  I think that is a reasonable requirement.  I
don't know of any Unix system which restricts processes to fewer than 20 file
descriptors.
Comment 7 Simon Hausmann 2008-08-05 16:06:39 UTC
(In reply to comment #6)
> You reopened this bug report.  Can you provide any more information about the
> failing case?  As it is, I don't know how to address the problem.  I've tested
> it by reducing the ulimit on my machine, and it worked as I expected when 10
> file descriptors were available.  I think that is a reasonable requirement.  I
> don't know of any Unix system which restricts processes to fewer than 20 file
> descriptors.

ulimit -n reports 1024 on my machine. The "testcase" I'm trying to build is the Qt port of WebKit,
which has a long link line, but I don't think there are more than, say, 300 object files.

I'll try to provide some more detailed instructions on how to reproduce/build it.

Comment 8 Sourceware Commits 2009-02-28 03:05:23 UTC
Subject: Bug 5990

CVSROOT:	/cvs/src
Module name:	src
Changes by:	ian@sourceware.org	2009-02-28 03:05:08

Modified files:
	gold           : ChangeLog descriptors.cc descriptors.h 

Log message:
	PR 5990
	* descriptors.h (Open_descriptor): Add is_on_stack field.
	* descriptors.cc (Descriptors::open): If the descriptor is on the
	top of the stack, remove it.  Initialize is_on_stack field.
	(Descriptors::release): Only add pod to stack if it is not on the
	stack already.
	(Descriptors::close_some_descriptor): Clear stack_next and
	is_on_stack fields.

Patches:
http://sources.redhat.com/cgi-bin/cvsweb.cgi/src/gold/ChangeLog.diff?cvsroot=src&r1=1.183&r2=1.184
http://sources.redhat.com/cgi-bin/cvsweb.cgi/src/gold/descriptors.cc.diff?cvsroot=src&r1=1.3&r2=1.4
http://sources.redhat.com/cgi-bin/cvsweb.cgi/src/gold/descriptors.h.diff?cvsroot=src&r1=1.3&r2=1.4

Comment 9 Ian Lance Taylor 2009-02-28 03:06:58 UTC
I looked back at this and I saw some bugs in the way I handled the list of file
descriptors.  I committed a patch to fix it.  Hopefully that will address the
issues people have seen.