When trying to link a binary which takes a large number of files, I get many errors like this:

ld-new: cannot open ../driver/ic/via/sinai/mlxhh/obj/1/mcgm.o: Too many open files

This is coming from Read_symbols::do_read_symbols. I can obviously work around the problem by increasing the number of allowed open file descriptors.
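For reference, the workaround can be done with `ulimit -n` in the shell, or programmatically with the POSIX `getrlimit`/`setrlimit` calls. A minimal sketch (this is an illustration of the workaround, not gold's code; the 65536 fallback is an arbitrary choice for the case where the hard limit is unlimited):

```cpp
#include <sys/resource.h>

// Raise the soft open-file limit up to the hard limit.
// Returns the new soft limit, or -1 if the rlimit calls fail.
long raise_fd_limit() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    // The soft limit may be raised as far as the hard limit without
    // privileges; pick an arbitrary cap if the hard limit is unlimited.
    rl.rlim_cur = (rl.rlim_max == RLIM_INFINITY) ? 65536 : rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    return static_cast<long>(rl.rlim_cur);
}
```

Calling this early in a process lets it open as many files as the hard limit allows, but of course it only postpones the problem if the linker never closes descriptors it no longer needs.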
Subject: Re: New: running out of file descriptors when linking >1024 files

Hi Chris,

> When trying to link a binary which takes a large number of files, I get many
> errors like this:
>
> ld-new: cannot open ../driver/ic/via/sinai/mlxhh/obj/1/mcgm.o: Too many open files
>
> This is coming from Read_symbols::do_read_symbols.
>
> I can obviously work around the problem by increasing the number of allowed
> open file descriptors.

Or you could place the object files into a library. Or a thin archive...

Cheers
Nick
An archive does solve the problem, but I don't think a thin archive would, would it? That still has to open each individual file, doesn't it?
I finally got back to this and committed a patch. Let me know if you have any trouble with it.
I still see the problem, though the error message is different now:

ld: out of file descriptors and couldn't close any
What does "ulimit -n" show on your system?
You reopened this bug report. Can you provide any more information about the failing case? As it is, I don't know how to address the problem. I've tested it by reducing the ulimit on my machine, and it worked as I expected when 10 file descriptors were available. I think that is a reasonable requirement. I don't know of any Unix system which restricts processes to fewer than 20 file descriptors.
(In reply to comment #6)
> You reopened this bug report. Can you provide any more information about the
> failing case? As it is, I don't know how to address the problem. I've tested
> it by reducing the ulimit on my machine, and it worked as I expected when 10
> file descriptors were available. I think that is a reasonable requirement. I
> don't know of any Unix system which restricts processes to fewer than 20 file
> descriptors.

ulimit -n reports 1024 on my machine. The "testcase" I'm trying to build is the Qt port of WebKit, which has a long link line, but I don't think there are more than, say, 300 object files. I'll try to provide some more detailed instructions on how to reproduce/build it.
Subject: Bug 5990

CVSROOT:	/cvs/src
Module name:	src
Changes by:	ian@sourceware.org	2009-02-28 03:05:08

Modified files:
	gold           : ChangeLog descriptors.cc descriptors.h

Log message:
	PR 5990
	* descriptors.h (Open_descriptor): Add is_on_stack field.
	* descriptors.cc (Descriptors::open): If the descriptor is on the
	top of the stack, remove it.  Initialize is_on_stack field.
	(Descriptors::release): Only add pod to stack if it is not on the
	stack already.
	(Descriptors::close_some_descriptor): Clear stack_next and
	is_on_stack fields.

Patches:
http://sources.redhat.com/cgi-bin/cvsweb.cgi/src/gold/ChangeLog.diff?cvsroot=src&r1=1.183&r2=1.184
http://sources.redhat.com/cgi-bin/cvsweb.cgi/src/gold/descriptors.cc.diff?cvsroot=src&r1=1.3&r2=1.4
http://sources.redhat.com/cgi-bin/cvsweb.cgi/src/gold/descriptors.h.diff?cvsroot=src&r1=1.3&r2=1.4
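To illustrate the bug the log message describes: gold keeps released-but-still-open descriptors on an intrusive free stack so they can be closed when the process runs low on descriptors. If the same descriptor could be pushed onto that stack twice, the singly linked stack would be corrupted. The following is a simplified, hypothetical sketch of the pattern (the names echo gold's, but this is not the actual code in descriptors.cc):

```cpp
#include <vector>

// One cached descriptor. stack_next links entries on the free stack
// by index; is_on_stack guards against a double push.
struct Open_descriptor {
    int stack_next = -1;
    bool is_on_stack = false;
};

class Descriptors {
public:
    int stack_top_ = -1;               // index of top-of-stack entry, -1 if empty
    std::vector<Open_descriptor> open_;

    // A caller is done with descriptor index i for now; keep it open but
    // park it on the free stack. Only push if it is not on the stack
    // already -- this is the check the patch added to release().
    void release(int i) {
        if (!open_[i].is_on_stack) {
            open_[i].stack_next = stack_top_;
            open_[i].is_on_stack = true;
            stack_top_ = i;
        }
    }

    // Descriptor index i is being reused. If it sits on top of the free
    // stack, pop it so the stack never holds a live descriptor -- the
    // check the patch added to open().
    void reopen(int i) {
        if (open_[i].is_on_stack && stack_top_ == i) {
            stack_top_ = open_[i].stack_next;
            open_[i].stack_next = -1;
            open_[i].is_on_stack = false;
        }
    }
};
```

Without the `is_on_stack` guard, releasing the same descriptor twice would make the entry point at itself (or drop other entries), so later attempts to close cached descriptors could loop or miss files entirely, which is consistent with the "couldn't close any" error seen above.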
I looked back at this and saw some bugs in the way I handled the list of file descriptors. I committed a patch to fix them. Hopefully that will address the issues people have seen.