Bug 22869

Summary: set architecture i8086 has no effect on disassembly
Product: gdb
Reporter: Aidan Khoury <aidankhoury>
Component: gdb
Assignee: Not yet assigned to anyone <unassigned>
Status: UNCONFIRMED
Severity: critical
CC: mat.matans, pedro, rjones
Priority: P2
Version: 8.1
Target Milestone: ---

Description Aidan Khoury 2018-02-20 20:29:01 UTC
When specifying the current target architecture as i8086 using `set architecture i8086`, GDB still disassembles in i386 protected mode.

Here is some example code compiled using NASM with BITS 16 specified:

; i386 instruction set
CPU 386

; Text section

; 16 bit code
[BITS 16]

    mov bx, 55AAh
    mov ah, 41h
    int 13h

    mov si, sDriveParamsBuffer
    mov word [si+DRIVE_PARAMS_EXTENDED.BufferSize], 16
    mov ah, 48h
    int 13h

Here is GDB's disassembly output at the correct location. It is clearly being decoded as 32-bit protected-mode code,
even though the target architecture is set to i8086:

(gdb) set architecture i8086
The target architecture is assumed to be i8086
(gdb) show architecture
The target architecture is assumed to be i8086
(gdb) disassemble /r $cs * 16 + $eip,+24
Dump of assembler code from 0x2033d to 0x20355:
   0x0002033d:  bb aa 55 b4 41          mov    ebx,0x41b455aa
   0x00020342:  cd 13                   int    0x13
   0x00020344:  72 78                   jb     0x203be
   0x00020346:  66 be a0 2f             mov    si,0x2fa0
   0x0002034a:  00 00                   add    BYTE PTR [eax],al
   0x0002034c:  c7 04 42 00 b4 48 cd    mov    DWORD PTR [edx+eax*2],0xcd48b400
   0x00020353:  13 72 68                adc    esi,DWORD PTR [edx+0x68]

Here are the current flags I use to configure the GDB project for build: 

./configure --enable-64-bit-bfd --disable-werror --disable-win32-registry --disable-rpath --with-expat --with-zlib --with-lzma --enable-tui
Comment 1 Aidan Khoury 2018-02-20 20:34:01 UTC
Here's a related link; in the comments on the answer by Michael Petch it's clear that others are experiencing the same issue:

Comment 2 Pedro Alves 2018-02-21 11:05:31 UTC
Did this ever work?  If it did, could someone use "git bisect" to find what caused the regression?
Comment 3 Aidan Khoury 2018-02-25 01:29:52 UTC
(In reply to Pedro Alves from comment #2)
> Did this ever work?  If it did, could someone use "git bisect" to find what
> caused the regression?

It seems to have worked with GDB versions 7.10 and earlier. I have only tested with versions 7.10 and 7.9, though.
Comment 4 Aidan Khoury 2018-02-27 23:11:24 UTC
(In reply to Aidan Khoury from comment #3)
> (In reply to Pedro Alves from comment #2)
> > Did this ever work?  If it did, could someone use "git bisect" to find what
> > caused the regression?
> It seems to have worked with GDB versions 7.10 and earlier. I have only
> tested with versions 7.10 and 7.9, though.

Correction - GDB 7.9* and earlier.
Comment 5 mat.matans 2019-03-19 17:06:35 UTC
I encountered this issue a few days ago and started investigating; this is what I could come up with:

Disclaimer: I have only tested this with recent versions of gdb (8+) against qemu; I haven't tested it against a real machine running in real mode. I assume this is the most common configuration nowadays.

The issue isn't actually purely GDB's fault; it's compounded by qemu (or the target in general) declaring itself as i386. Specifically, when you attach to qemu's gdbserver, it provides a qXfer target-description response for the i386 architecture even though you start in real mode. The response looks something like this:

>> <?xml version="1.0"?>
>> <!DOCTYPE target SYSTEM "gdb-target.dtd">
>> <target>
>> <architecture>i386</architecture>
>> <xi:include href="i386-32bit.xml"/>
>> </target>
The important part being the architecture tag.

Normally you'd expect "set architecture ..." to overrule the target information, but in the case of i8086 and i386 they are both considered compatible - explained next.

The 'bfd_arch_info's for i8086 and i386 (bfd/cpu-i386.c) both use 'bfd_i386_compatible' as their 'compatible' function, which is a light wrapper around 'bfd_default_compatible' with some extra handling to avoid mixing x86 and x86_64. The function looks something like this (bfd/archures.c):

>> if (a->arch != b->arch)
>>    return NULL;
>>  if (a->bits_per_word != b->bits_per_word)
>>    return NULL;
>>  if (a->mach > b->mach)
>>    return a;
>>  if (b->mach > a->mach)
>>    return b;
>>  return a;

The idea here is that if two architectures share the same 'arch' and are "word-size-compatible", the one with the higher machine number (mach) is taken as a compatible superset of the other. This is mostly correct for i8086 and i386, but the default operand size is different.
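The selection rule can be sketched in Python (illustrative, not bfd source; the mach values follow the bit flags in bfd.h, where bfd_mach_i386_i8086 is 2 and bfd_mach_i386_i386 is 4, and cpu-i386.c declares the i8086 entry with 32 bits per word, which is exactly why the two compare as compatible):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ArchInfo:
    name: str
    arch: str           # architecture family (bfd_arch_i386 for both entries)
    bits_per_word: int
    mach: int           # machine variant number

def default_compatible(a: ArchInfo, b: ArchInfo):
    """Mimics bfd_default_compatible: None if incompatible, otherwise
    the arch with the higher mach value (the 'more featureful' superset)."""
    if a.arch != b.arch:
        return None
    if a.bits_per_word != b.bits_per_word:
        return None
    return a if a.mach >= b.mach else b

# Values patterned on bfd.h / cpu-i386.c (i8086 is declared 32 bits per word).
i8086 = ArchInfo("i8086", "i386", 32, 2)
i386 = ArchInfo("i386", "i386", 32, 4)

# i386 wins in both directions, so the i8086 selection is absorbed into it.
assert default_compatible(i8086, i386) is i386
assert default_compatible(i386, i8086) is i386
```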

If we go back to gdb we can see where this "compatibility" is an issue (choose_architecture_for_target in gdb/arch-utils.c):

>> /* BFD's 'A->compatible (A, B)' functions return zero if A and B are
>>      incompatible.  But if they are compatible, it returns the 'more
>>      featureful' of the two arches.  That is, if A can run code
>>      written for B, but B can't run code written for A, then it'll
>>      return A.
>>      Some targets (e.g. MIPS as of 2006-12-04) don't fully
>>      implement this, instead always returning NULL or the first
>>      argument.  We detect that case by checking both directions.  */
>>   compat1 = selected->compatible (selected, from_target);
>>   compat2 = from_target->compatible (from_target, selected);

This function gets called when you issue the "set architecture ..." command; 'selected' is the newly selected arch (i8086) and 'from_target' is the one advertised by the target (qemu). Both 'compat1' and 'compat2' come back as i386 because it is the superset.
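The net effect can be sketched like this (a simplification of the decision logic, not the literal gdb source; the 'compatible' callback stands in for bfd's compatible function described above):

```python
def choose_architecture_for_target(selected, from_target, compatible):
    """Simplified sketch of choose_architecture_for_target (gdb/arch-utils.c):
    probe compatibility in both directions and keep the result when the
    two directions agree."""
    if from_target is None:
        return selected
    compat1 = compatible(selected, from_target)
    compat2 = compatible(from_target, selected)
    if compat1 is None and compat2 is None:
        # Neither direction worked: warn and keep the user's choice.
        return selected
    if compat1 == compat2:
        return compat1          # both directions agree on the superset
    return compat1 if compat1 is not None else compat2

# bfd treats i386 as the superset of i8086, so both probes return "i386"
# and the user's "set architecture i8086" is silently replaced.
superset = lambda a, b: "i386"
assert choose_architecture_for_target("i8086", "i386", superset) == "i386"
```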

The final piece of the puzzle is how the disassembler chooses the default operand size (it is not the same as the word size in 'bfd_arch_info'). This happens in opcodes/i386-dis.c:

>>   ...
>>   else if (info->mach == bfd_mach_i386_i8086)
>>    {
>>      address_mode = mode_16bit;
>>      priv.orig_sizeflag = 0;
>>    }
>>   ...

So only 'bfd_mach_i386_i8086' gets the special 16-bit default operand size, and since that information was lost back in 'choose_architecture_for_target', 32-bit is used by default. This is actually the only reference to i8086 in the disassembler (i386-dis.c).

bfd actually provides a few mechanisms to get a 16-bit default operand size under i386 via the "i8086", "addr16" and "data16" flags (-M... in objdump); unfortunately gdb does not allow disassembler options under i386.
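Outside gdb you can see those flags doing the right thing with objdump (a sketch; assumes GNU binutils is installed, and uses the opcode bytes of the first three instructions from the dump above):

```shell
# Bytes bb aa 55 b4 41 cd 13 (mov bx,0x55AA; mov ah,0x41; int 0x13),
# written with octal escapes for printf portability.
printf '\273\252\125\264\101\315\023' > realmode.bin

# -M i8086 selects the 16-bit default operand/address size in i386-dis.c.
objdump -D -b binary -m i386 -M i8086,intel realmode.bin
```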


I can't decide who is at fault here:
* GDB seems to do the sane thing in taking the superset arch, but shouldn't 'set architecture' overrule everything?
* qemu is indeed emulating an i386 processor, even though it starts in real mode.
* bfd - are i8086 and i386 really all that compatible? If they are, I wouldn't expect the disassembly defaults to differ.


I actually found a workaround for the issue: if you supply your own 'target.xml' and set the architecture to i8086, GDB will keep this setting.
This is the one I'm using: https://gist.github.com/MatanShahar/1441433e19637cf1bb46b1aa38a90815
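For reference, a minimal description along those lines might look like this (a sketch modeled on the qemu response quoted above; only the architecture tag changes, and the register include is whatever the target normally supplies):

```xml
<?xml version="1.0"?>
<!DOCTYPE target SYSTEM "gdb-target.dtd">
<target>
  <architecture>i8086</architecture>
  <xi:include href="i386-32bit.xml"/>
</target>
```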


I haven't had the chance to test it on gdb <7.9 to see whether the issue is still there or not, but I doubt it's new; most of the code I went through is nine years old.
I will hopefully find some time later today.