This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: Building consensus over DNSSEC enhancements to glibc.


On 19.11.2015 04:58, Zack Weinberg wrote:
> On 11/18/2015 10:40 AM, Petr Spacek wrote:
>> On 17.11.2015 06:14, Carlos O'Donell wrote:
>>> On 11/16/2015 08:39 PM, Zack Weinberg wrote:
>>>> On 11/16/2015 02:07 PM, Petr Spacek wrote: 
>>>>> Please note that the necessary level of trust is different for different DNS
>>>>> record types. For A/AAAA records it is widely accepted that results may be
>>>>> forged, and applications account for that. For other purposes, like TLS
>>>>> certificate validation, it is necessary to have full confidence in the data, so
>>>>> validation results have to be properly communicated to the application. The
>>>>> list of "must be secure" record types changes over time, so it is not
>>>>> feasible to hardcode the list into resolver libraries.
>>>>
>>>> Again, applications cannot be trusted to process AD correctly.  The
>>>> correct approach is for the resolver to hardcode a list of records that
>>>> *should* be passed through even if received from an unsigned zone: A,
>>>> AAAA, PTR, MX, SRV, TXT.  I think that's it.
>>>
>>> I agree that applications should not see the AD bit unless they are
>>> processing the DNS queries and responses themselves (as the openssh example
>>> in my other on-thread response does).
>>>
>>> In the case of the glibc APIs my preferred solution for policy-based
>>> whitelist/blacklist of records as you suggest is for the validating
>>> resolver to make those decisions (relaying them back to the application
>>> via the secured channel).
>>
>> Uh, I do not think that any hardcoded list would work. DNS is being extended
>> all the time, and often in unexpected ways, so hardcoded lists will cause
>> problems sooner or later, and will most probably conflict with RFC 3597
>> [Handling of Unknown DNS Resource Record Types].
> 
> This does not have to be as difficult as you are making it.
> 
> Unsigned zones are allowed for compatibility only.  New record types do
> not have to work in unsigned zones.  In fact, new record types SHOULD
> NOT[rfc2119] work in unsigned zones, because if they only work in signed
> zones the security considerations become simpler.

Please let me explain why the assumption 'new record types SHOULD NOT[rfc2119]
work in unsigned zones' is not feasible.

I will use a couple of (counter)examples:
Speaking purely about record types, are you implying that e.g. the EUI48 RR
type needs to be signed? Why is that? EUI48 was standardized in 2013, in RFC
7043, well past the DNSSEC RFCs, so age is not a good indicator here.

Similarly, we would have to consider that there are RR type ranges defined for
private use. That opens a Pandora's box.

Also, we would have to consider private deployments/DNS in private networks
which are not signed on purpose, e.g. because there is some black DNS magic
which auto-generates responses, or simply because DNSSEC is overkill in a
particular scenario.


Let us take one step back:
Limiting RR types at the DNS library level has the fundamental problem that
there is simply not enough information to decide what can go through even
without signatures and what has to be stopped.

Only the consumers of the DNS data know for what purpose the data will be used,
and thus only they know what level of trust is required. This might even depend
on the configuration of the consumer.

More examples:
a) Imagine a standard SMTP server configured to do best-effort delivery with
opportunistic/unauthenticated encryption.

It does an MX record lookup to determine the host names of SMTP servers for the
domain example.com. This lookup does not need to be DNSSEC-secured because
channel security is opportunistic/unauthenticated anyway.


b) Imagine an SMTP server configured to do RFC 7672-authenticated mail transfer
for a particular domain and to avoid falling back to cleartext.

In this case MX record lookup MUST be DNSSEC-secured. (RFC 7672 section 2.2.1.)


c) Imagine an NTP (or telnet, or ...) client doing an SRV record lookup with
the intent to discover NTP servers. The NTP protocol itself is unprotected, so
there is no point in requiring DNSSEC validation for this lookup, because an
attacker can MitM DNS just as well as NTP.


d) Imagine an XMPP client doing an SRV record lookup with the intent to use
DNSSEC for TLS certificate validation. In this case the SRV records MUST be
DNSSEC-secured. (RFC 7673 section 4.)
(RFC 7673 section 4.)


e) Now add CNAME into the mix ...


For the reasons described above, I believe that whitelisting cannot be
implemented at the DNS library level unless there is an RFC which sorts out all
the problems mentioned above. IMHO such a draft has no chance of getting
through the process.

My personal conclusion is that the decision to use (or not to use) data
obtained from DNS needs to be left to the data consumer. I believe that the AD
bit needs to be exposed to consumers, because only consumers know what its
expected/required value is.

Sure, there will be bugs in applications/consumers, but I do not see a way
around that unless we significantly limit the possible uses of DNS in
applications.

Petr Spacek  @  Red Hat

> Therefore, glibc can and should hardcode a whitelist of record types
> that are allowed to work in unsigned zones, and we're done.

