This is the mail archive of the mailing list for the GDB project.
Re: Patch to handle compressed sections
} Or in the beginning of .debug_info.zlib? (.zdebug_info?)
I thought of that too, but again I didn't see much benefit. On the
other hand, there is a cost (albeit small): now you have a "format"
for compressed data, with a header section and data to follow, so you
need to codify and document the header format, and deal with issues
like 4-byte vs. 8-byte lengths and endianness, and then you'll probably want a
version, and things just get very complicated. I like the simplicity
of saying the section is just a blob of compressed data.
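To make the "just a blob" scheme concrete, here is a minimal sketch in Python. It assumes the compressor is zlib and that the uncompressed length is recorded somewhere outside the section contents (the message below mentions keeping only that one piece of metadata); the function and parameter names are illustrative, not part of any actual patch.

```python
import zlib

def read_compressed_section(blob, uncompressed_size):
    """Decompress a section that is nothing but a raw zlib stream.

    'uncompressed_size' is assumed to be stored elsewhere (e.g. in a
    side table); the section itself carries no header or version.
    """
    data = zlib.decompress(blob)
    # Sanity-check against the recorded length.
    assert len(data) == uncompressed_size
    return data

# Round-trip check with some stand-in section contents.
payload = b"debug info " * 100
blob = zlib.compress(payload)
restored = read_compressed_section(blob, len(payload))
```

The appeal is exactly what the message describes: no header format to codify, no word size or endianness questions, no versioning.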
I feel if we start needing more data stored with a compressed section
than just its uncompressed length, then we can move to
.debug_info.zlib and write a header format to encompass all that data,
at that time. But it's not really necessary to do that now. (And,
honestly, I'd like to make it difficult to go that route, since again
it adds complexity to what currently seems like a pretty simple
scheme.)
} I've been looking at this and wondering if block compression makes
} more sense; that would let an optimized consumer keep less than the
} whole decompressed contents in memory.
The current algorithm supports block compression -- if the section
consists of several compressed blocks appended together, it will
recognize that and correctly decompress.
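This works because a zlib decompressor reports where one stream ends, so appended streams can be peeled off one at a time. A small sketch of that loop, assuming plain zlib streams concatenated back to back (names are illustrative):

```python
import zlib

def decompress_all_blocks(blob):
    """Decompress a section made of several zlib streams appended together."""
    parts = []
    while blob:
        d = zlib.decompressobj()
        parts.append(d.decompress(blob))
        parts.append(d.flush())
        # Bytes past the end of this stream belong to the next block.
        blob = d.unused_data
    return b"".join(parts)

block_a = zlib.compress(b"block one ")
block_b = zlib.compress(b"block two")
section = decompress_all_blocks(block_a + block_b)
```

Each iteration consumes exactly one compressed block; `unused_data` carries whatever trails it into the next pass.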
It's true that right now we always decompress all the blocks when we
read a section, but one could imagine that instead we keep track of
this info on a per-block basis, and only decompress a given block at
need. I think that would complicate the logic significantly, though,
and I don't think it's warranted for this version 1.0 patch. The
important thing, I think, is that this patch doesn't close off making
such an optimization in the future.
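For a sense of what that future optimization might look like, here is a hedged sketch: if the reader recorded each block's offset and compressed length up front (the per-block info the message alludes to), it could decompress a block only on first use. Everything here, including the class and its parameters, is hypothetical.

```python
import zlib

class LazySection:
    """Decompress blocks on demand, given known block boundaries.

    'offsets' is a list of (start, length) pairs for each compressed
    block; in a real consumer this bookkeeping would be built when
    the section is first scanned or recorded alongside it.
    """
    def __init__(self, blob, offsets):
        self.blob = blob
        self.offsets = offsets
        self._cache = {}          # block index -> decompressed bytes

    def block(self, i):
        if i not in self._cache:
            start, length = self.offsets[i]
            self._cache[i] = zlib.decompress(self.blob[start:start + length])
        return self._cache[i]

b1 = zlib.compress(b"one")
b2 = zlib.compress(b"two")
sec = LazySection(b1 + b2, [(0, len(b1)), (len(b1), len(b2))])
second = sec.block(1)  # only block 1 is decompressed here
```

The point made above holds: nothing in the simple whole-section scheme prevents moving to this later, since the on-disk bytes are the same either way.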