


DWARF2 FDE Address Mismatch


I am in the process of porting a new MCU/processor to gcc/gdb.  It has a
Harvard architecture with a 24-bit code address space (word-aligned
instructions) and a 16-bit data address space.  Our toolchain emits ELF
binaries with code and data VMAs based at zero.  The program loads as
though it is a ROM image located entirely in code space.

In the setup we have gone for, all pointers are 16 bits wide; code
pointers actually address "trampolines" to the respective functions.
Our preferred debug format is DWARF2, so we have set DWARF2_ADDR_SIZE
to 4 in order to represent our full range of code addresses correctly.
Without this setting, debug addresses are emitted as
POINTER_SIZE / BITS_PER_UNIT == 2 bytes.
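
For reference, the target macros involved look roughly like the sketch
below.  The header path and the POINTER_SIZE value are only
illustrative stand-ins for our port, not something lifted from an
existing gcc target:

    /* In the port's target header, e.g. gcc/config/<cpu>/<cpu>.h.
       C pointers stay 16 bits, but addresses in the .debug_* sections
       must cover the full 24-bit code space, so widen them to 4 bytes.  */
    #define POINTER_SIZE      16  /* POINTER_SIZE / BITS_PER_UNIT == 2 bytes */
    #define DWARF2_ADDR_SIZE  4   /* DWARF2 debug addresses are 4 bytes wide */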

The output produced by gcc looks correct at this juncture.  However, we
have problems loading the DWARF2 info in gdb.  Most notably, gdb
defaults cie->encoding to DW_EH_PE_absptr, which on our target means
addresses are read as sizeof(void*) == 2 bytes.  The encoding can be
overridden via a CIE augmentation, but gcc only emits augmentations for
EH frame data; DWARF2_ADDR_SIZE applies to the non-EH debug sections.
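
To make the mismatch concrete, here is a small standalone illustration
(not gdb code; read_le() is just a stand-in for the consumer's address
read, and little-endian storage is assumed purely for the example) of
what happens when a 4-byte FDE initial location is read with a 2-byte
address size:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical little-endian reader standing in for whatever the
       DWARF consumer uses to read an address of SIZE bytes.  */
    static uint32_t
    read_le (const unsigned char *buf, unsigned int size)
    {
      uint32_t val = 0;
      unsigned int i;

      for (i = 0; i < size; i++)
        val |= (uint32_t) buf[i] << (8 * i);
      return val;
    }

    int
    main (void)
    {
      /* FDE initial_location as gcc emits it with DWARF2_ADDR_SIZE == 4:
         a 24-bit code address, say 0x012345, stored in four bytes.  */
      unsigned char fde_addr[4] = { 0x45, 0x23, 0x01, 0x00 };

      printf ("read as a 4-byte address:          0x%06x\n",
              read_le (fde_addr, 4));
      printf ("read as sizeof(void*) == 2 bytes:  0x%06x\n",
              read_le (fde_addr, 2));

      /* The 2-byte read yields 0x002345, losing the high byte, and
         leaves the reader two bytes short, so every following FDE
         field is then parsed at the wrong offset.  */
      return 0;
    }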

In short, it looks like GDB's DWARF2 support lacks a mechanism to
override the address size (comparable to DWARF2_ADDR_SIZE in gcc).  Is
my understanding correct?

Regards,
Matt


-- 
Matt Kern
http://www.undue.org/

