Jakub Jelinek [Tue, 29 May 2012 17:00:58 +0000 (19:00 +0200)]
Implement a special low_mem mode, which roughly halves the amount of
RAM needed at the expense of a small slowdown on very large
inputs (more than -l COUNT DIEs). Give up if the input has more than
-L COUNT DIEs.
Jakub Jelinek [Wed, 23 May 2012 10:35:47 +0000 (12:35 +0200)]
Save some more memory by removing the die_cu fields from struct dw_die.
Instead, the compile_unit/partial_unit DIE is tagged with the die_root
bit, and its die_parent is then a struct dw_cu pointer rather than
a DIE pointer (before it was NULL). To find the CU for any DIE,
a new die_cu inline function has been added.
Jakub Jelinek [Fri, 18 May 2012 19:15:04 +0000 (21:15 +0200)]
Save memory during optimize_multifile by collapsing non-referenced
children of toplevel DIEs and expanding them again if needed
for die_eq_1 (anything that gets expanded is kept expanded
afterwards).
Jakub Jelinek [Fri, 18 May 2012 10:38:33 +0000 (12:38 +0200)]
Add a hashtable for abbrev offsets to decrease the amount of memory
allocated for abbrev_tag structures and to decrease the number of
read_abbrev calls when abbrev offsets in CUs don't increase
monotonically (common for dwz-adjusted/created CUs).
Jakub Jelinek [Thu, 17 May 2012 14:49:27 +0000 (16:49 +0200)]
Don't distinguish between former PUs and former CUs; put length
7 CU markers just after each DSO/executable chunk of CUs. In
non-op_multifile mode use cu_chunk as a CU counter rather than keeping
it at 0.
Jakub Jelinek [Thu, 3 May 2012 19:36:36 +0000 (21:36 +0200)]
- cache read_debug_line results to decrease memory usage
- put DIEs that won't be needed again onto a freelist and allocate from it
- clear die_op_type_referenced during wr_multifile
- use ELF_C_WRITE instead of ELF_C_WRITE_MMAP in elf_begin
Jakub Jelinek [Thu, 3 May 2012 12:09:57 +0000 (14:09 +0200)]
During partition_dups, force together with anything that is worthwhile
to put into PUs also any DIEs that are referenced by those (even indirectly),
and any DIEs that are dups referenced by the same set of CUs.
This way PUs should no longer need DW_FORM_ref_addr references to CU DIEs,
unless they were using DW_FORM_ref_addr already before.
Jakub Jelinek [Mon, 30 Apr 2012 14:17:25 +0000 (16:17 +0200)]
First step towards multifile DWARF optimizations.
While handling input files normally, this gathers interesting
data in temporary files for the .debug_{info,abbrev,line,str} sections,
then for now skips the optimization pass on the gathered sections
and writes the result to the filename given by the -m option.
Jakub Jelinek [Thu, 26 Apr 2012 09:20:15 +0000 (11:20 +0200)]
Put the new_data and new_size info about debug sections directly into
the debug_sections array, to simplify write_dso and avoid passing
around too many pointers and sizes.
Jakub Jelinek [Wed, 25 Apr 2012 16:53:44 +0000 (18:53 +0200)]
Optimize DW_AT_high_pc with DW_FORM_addr for DWARF4+ into a constant
class form of DW_AT_high_pc (which is the size of the entity rather than
the address of the first byte after it).
Jakub Jelinek [Fri, 20 Apr 2012 16:47:46 +0000 (18:47 +0200)]
IMPORTANT: usage change, dwz now works similarly to strip.
dwz file
will modify file in place (well, write to temporary, rename over),
dwz -o outfile file
will do what dwz file outfile did before.
dwz file1 file2 file3
will modify all the files in place,
dwz
will modify a.out in place,
dwz -o b.out
will modify a.out into b.out.
In addition, this change revamps the OOM handling to just skip the
single file currently being handled, complain, clean up all memory
and continue with the next file.
Jakub Jelinek [Fri, 20 Apr 2012 07:06:59 +0000 (09:06 +0200)]
Second part of memory management cleanups. For permanently
(well, for the duration of handling a single file) allocated objects
use a big pool allocator that doesn't unnecessarily align everything
to 16 bytes and allocates in ~16MB chunks rather than 4KB.
This decreases memory usage a little on large testcases and
speeds things up a tiny bit.
Also, avoid function-local obstacks; if memory allocation failures
are handled via longjmp in the future, those might not be freed.
Jakub Jelinek [Thu, 19 Apr 2012 11:43:17 +0000 (13:43 +0200)]
Use just u.p1.die_ref_hash instead of u.p1.die_hash ^ u.p1.die_ref_hash
as the hash in the dup_htab hash table. die_ref_hash already has die_hash
iteratively hashed in, and on leaf DIEs it is equal to die_hash, so the
xor resulted in 0 for all leaf DIEs.