This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH] Implement IPv6 support for GDB/gdbserver


On 06/08/2018 02:13 AM, Sergio Durigan Junior wrote:
> On Wednesday, June 06 2018, Pedro Alves wrote:


>>> Another thing worth mentioning is the new 'GDB_TEST_IPV6' testcase
>>> parameter, which instructs GDB and gdbserver to use IPv6 for
>>> connections.  This way, if you want to run IPv6 tests, you do:
>>>
>>>   $ make check-gdb RUNTESTFLAGS='GDB_TEST_IPV6=1'
>>
>> That sounds useful, but:
>>
>> #1 - I don't see how that works without also passing
>>      --target_board= pointing at one of the native-gdbserver and
>>      native-extended-gdbserver board files.  
>>      Can you expand on why you took this approach instead of:
>>  
>>   a) handling GDB_TEST_IPV6 somewhere central, like
>>      in gdb/testsuite/gdbserver-support.exp, where we
>>      default to "localhost:".  That would exercise the gdb.server/
>>      tests with ipv6, when testing with the default/unix board file.
>>
>>   b) add new board files to test with ipv6, like native-gdbserver-v6
>>      or something like that.
>>
>>   c) both?
> 
> I was thinking about a good way to test this feature, and my initial
> assumption was that the test would only make sense when --target-board=
> is passed.  That's why I chose to implement the mechanism on
> gdb/testsuite/boards/gdbserver-base.exp.  Now that you mention it, I
> realize I should have stated these expectations in the commit
> message, and that the "make check-gdb RUNTESTFLAGS='GDB_TEST_IPV6=1'"
> example is actually wrong because it doesn't specify any of the
> target boards.
> 
> Having said that, and after reading your question, I understand that the
> testing can be made more flexible by implementing the logic inside
> gdb/testsuite/gdbserver-support.exp instead, which will have the benefit
> of activating the test even without a gdbserver target board being
> specified.  I will give it a try and see if I can implement it in a
> better way.

I'd think you just have to hook the GDB_TEST_IPV6 env var reading here,
in gdbserver_start:

    # Extract the local and remote host ids from the target board struct.
    if [target_info exists sockethost] {
	set debughost [target_info sockethost]
    } else {
	set debughost "localhost:"
    }

I'd also try removing the

  set_board_info sockethost "localhost:"

line from native-gdbserver.exp and native-extended-gdbserver.exp,
since that's the default.  But it's not really necessary if
the env var takes precedence over the target board setting.

>> Does connecting with "localhost6:port" default to IPv6, BTW?
>> At least fedora includes "localhost6" in /etc/hosts.
> 
> Using "localhost6:port" works, but it doesn't default to IPv6.  Here's
> what I see on the gdbserver side:
> 
>   $ ./gdb/gdbserver/gdbserver --once localhost6:1234 a.out
>   Process /path/to/a.out created; pid = 7742
>   Listening on port 1234
>   Remote debugging from host ::ffff:127.0.0.1, port 39196
> 
> This means that the connection came using IPv4; it works because IPv6
> sockets also listen for IPv4 connections on Linux (one can change this
> behaviour by setting the "IPV6_V6ONLY" socket option).
> 
> This happens because I've made a decision to default to AF_INET (instead
> of AF_UNSPEC) when no prefix has been given.  This basically means that,
> at least for now, we assume that an unknown (i.e., not prefixed)
> address/hostname is IPv4.  I've made this decision thinking about the
> convenience of the user: when AF_UNSPEC is used (and the user hasn't
> specified any prefix), getaddrinfo will return a linked list of possible
> addresses that we should try to connect to, which usually means an IPv6
> and an IPv4 address, in that order.  Usually this is fine, because (as I
> said) IPv6 sockets can also listen for IPv4 connections.  However, if
> you start gdbserver with an explicit IPv4 address:
> 
>   $ ./gdb/gdbserver/gdbserver --once 127.0.0.1:1234 a.out
> 
> and try to connect GDB to it using an "ambiguous" hostname:
> 
>   $ ./gdb/gdb -ex 'target remote localhost:1234' a.out
> 
> you will notice that GDB takes a somewhat long time trying to connect
> (to the IPv6 address first, because of AF_UNSPEC), and then it will
> error out saying that the connection timed out:
> 
>   tcp:localhost:1234: Connection timed out.
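
The multi-candidate behaviour described above can be sketched with
getaddrinfo directly (illustrative code, not GDB's; AI_NUMERICHOST and
the literal addresses are used here just to keep the output
deterministic).  With AF_UNSPEC, getaddrinfo hands back a linked list of
candidates, and the client is expected to try each in turn:

```c
/* Sketch: iterating getaddrinfo candidates under AF_UNSPEC.  */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

static void
show_candidates (const char *host)
{
  struct addrinfo hints, *res, *p;

  memset (&hints, 0, sizeof hints);
  hints.ai_family = AF_UNSPEC;      /* Accept both IPv4 and IPv6.  */
  hints.ai_socktype = SOCK_STREAM;
  hints.ai_flags = AI_NUMERICHOST;  /* Numeric literals only, for
                                       deterministic output.  */

  if (getaddrinfo (host, "1234", &hints, &res) != 0)
    return;

  /* A real client would socket()/connect() on each candidate and stop
     at the first one that succeeds -- this loop is where GDB's retry
     mechanism spends its time when the first candidate is
     unreachable.  */
  for (p = res; p != NULL; p = p->ai_next)
    printf ("%s -> %s\n", host,
            p->ai_family == AF_INET6 ? "AF_INET6" : "AF_INET");

  freeaddrinfo (res);
}

int
main (void)
{
  show_candidates ("::1");
  show_candidates ("127.0.0.1");
  return 0;
}
```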

How do other tools handle this?  For example, with ping, I get:

 $ ping localhost
 PING localhost.localdomain (127.0.0.1) 56(84) bytes of data.
 64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.048 ms
 ^C

 $ ping localhost6
 PING localhost6(localhost6.localdomain6 (::1)) 56 data bytes
 64 bytes from localhost6.localdomain6 (::1): icmp_seq=1 ttl=64 time=0.086 ms
 ^C

How does ping know, with no visible delay, that "localhost"
resolves to an IPv4 address, and that "localhost6" resolves to
an IPv6 address?

Same with telnet:

 $ telnet localhost
 Trying 127.0.0.1...
 telnet: connect to address 127.0.0.1: Connection refused
 $ telnet localhost6
 Trying ::1...
 telnet: connect to address ::1: Connection refused

Same with netcat:

 $ nc -vv localhost
 Ncat: Version 7.60 ( https://nmap.org/ncat )
 NCAT DEBUG: Using system default trusted CA certificates and those in /usr/share/ncat/ca-bundle.crt.
 NCAT DEBUG: Unable to load trusted CA certificates from /usr/share/ncat/ca-bundle.crt: error:02001002:system library:fopen:No such file or directory
 libnsock nsock_iod_new2(): nsock_iod_new (IOD #1)
 libnsock nsock_connect_tcp(): TCP connection requested to 127.0.0.1:31337 (IOD #1) EID 8
                                                           ^^^^^^^^^
 libnsock nsock_trace_handler_callback(): Callback: CONNECT ERROR [Connection refused (111)] for EID 8 [127.0.0.1:31337]
 Ncat: Connection refused.

 $ nc -vv localhost6
 Ncat: Version 7.60 ( https://nmap.org/ncat )
 NCAT DEBUG: Using system default trusted CA certificates and those in /usr/share/ncat/ca-bundle.crt.
 NCAT DEBUG: Unable to load trusted CA certificates from /usr/share/ncat/ca-bundle.crt: error:02001002:system library:fopen:No such file or directory
 libnsock nsock_iod_new2(): nsock_iod_new (IOD #1)
 libnsock nsock_connect_tcp(): TCP connection requested to ::1:31337 (IOD #1) EID 8
                                                           ^^^
 libnsock nsock_trace_handler_callback(): Callback: CONNECT ERROR [Connection refused (111)] for EID 8 [::1:31337]
 Ncat: Connection refused.

BTW, I think a much more common scenario of local use of
gdbserver is to omit the host name:

 ./gdb/gdbserver/gdbserver --once :1234
 ./gdb/gdb -ex 'target remote :1234'

I assume that would work fine with AF_UNSPEC ?

> 
> This is because of the auto-retry mechanism implemented for TCP
> connections in GDB; it keeps retrying the IPv6 address until it
> decides the connection is not going to work.  Only after this timeout
> does GDB try the IPv4 address, and succeed.
> 
> So, the way I see it, we have a few options to deal with this scenario:
> 
> 1) Assume that the unprefixed address/hostname is AF_INET (i.e., keep
> the patch as-is).
> 
> 2) Don't assume anything about the unprefixed address/hostname (i.e.,
> AF_UNSPEC), and don't change the auto-retry system.  This is not very
> nice because of what I explained above.
> 
> 3) Don't assume anything about the unprefixed address/hostname (i.e.,
> AF_UNSPEC), but *DO* change the auto-retry system to retry fewer times
> (currently it's set to 15 retries, which seems like too many to me).
> Maybe 5 times is enough?  This will still have an impact on the user,
> but she will have to wait less time, at least.
> 
> Either (1) or (3) is fine by me.  If we go with (1), we'll eventually
> need to change the default to IPv6 (or to AF_UNSPEC), but only once
> IPv6 is more widely adopted.

I'd like to understand this a bit more before coming up with a
decision.  I feel like we're missing something.

A part of it is that it kind of looks like a "doctor, it hurts when I
do this; then don't do that" scenario, with using different host names
on gdbserver and GDB (localhost vs 127.0.0.1).  Why would you do that
for local debugging?  You'd get the same problem if localhost
only ever resolved to an IPv6 address, I'd think.  But still, I'd
like to understand how other tools handle this.

>>> +  char *orig_name = strdup (name);
>>
>> Do we need a deep copy?  And if we do, how about
>> using std::string to avoid having to call free further
>> down?
> 
> This is gdbserver/gdbreplay.c, where apparently we don't have access to
> a lot of our regular facilities on GDB.  For example, I was trying to
> use std::string, its methods, and other stuff here (even i18n
> functions), but the code won't compile, and as far as I have researched
> this is intentional, because gdbreplay needs to be a very small and
> simple program.  

What did you find that gave you that impression?  There's no reason
that gdbreplay needs to be small or simple.  Certainly doesn't need
to be smaller than gdbserver.

> at least that's what I understood from our
> archives/documentation.  I did not feel confident reworking gdbreplay to
> make it "modern", so I decided to implement things "the old way".

Seems like adding to technical debt, to be honest.  Did you hit some
insurmountable problem, or would just a little bit of fixing here and
there be doable?

Thanks,
Pedro Alves

