Threat model for GNU Binutils
Siddhesh Poyarekar
siddhesh@gotplt.org
Fri Apr 14 14:08:17 GMT 2023
On 2023-04-14 09:12, Richard Earnshaw wrote:
> OK, I think it's time to take a step back.
>
> If we are to have a security policy, I think we first need a threat
> model. Without it, we can't really argue about what we're trying to
> protect against.
>
> So the attached is my initial stab at trying to write down a threat
> model. Some of this is subjective, but I'm trying to be reasonably
> realistic. Most of these threats are really quite low in comparison to
> other tools and services that run on your computer.
>
> In practice, you then take the model and the impact/likelihood matrix
> and decide what level of action is needed for each combination -
> whether it be from pre-emptive auditing, through fixing bugs if found,
> down to doing nothing. But that's the step after we have the model
> agreed.
>
> If you can think of threats I've missed (quite likely, I haven't thought
> about this for long enough), then please suggest additions.
I assume you're proposing that this be added to SECURITY.md or similar?
There are overlaps with what we intend for the first part of SECURITY.md.
> Threat model for GNU Binutils
> =============================
>
> The following potential security threats have been identified in GNU
> Binutils. Note that this does not mean that such a vulnerability is
> known to exist.
A threat model should define the nature of the inputs, since that is
what makes the difference between something being considered a security
threat and a regular bug.
> Threats arising from execution of the GNU Binutils programs
> -----------------------------------------------------------
>
> 1) Privilege escalation.
>
> Nature:
> A bug in the tools allows the user to gain privileges that they did not
> already have.
>
> Likelihood: Low - tools do not run with elevated privileges, so this
> would most likely involve a bug in the kernel.
A more general threat is the crossing of privilege boundaries, which is
not only user -> root but also user1 -> user2. So this won't
necessarily involve kernel bugs.
> Impact: Critical
Impact for security issues is assessed on a bug-by-bug basis, so
stating a blanket impact here doesn't really make sense.
>
> Mitigation: None
Sandboxing is the answer for everything :)
> 2) Denial of service
>
> Nature:
> A bug in the tools leads to resources in the system becoming
> unavailable on a temporary or permanent basis
The answer here changes based on whether the input is trusted or not.
>
> Likelihood: Low
>
> Impact: Low - tools are normally run under local user control and
> not as daemons.
>
> Mitigation: sandboxing if access to the tools from a third party is
> needed (eg a web service).
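For the web service case, hard resource limits go a long way even
before full sandboxing. Here's a minimal sketch of what I mean
(Linux/POSIX assumed; the 10 second and 1 GiB caps are arbitrary
illustrations): run objdump on the untrusted input in a child process
under hard limits, so a pathological file cannot exhaust the machine.

/* Sketch: run objdump on an untrusted file under resource limits.
   Not a full sandbox; a real deployment would add seccomp or
   namespace isolation on top.  */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  if (argc != 2)
    {
      fprintf (stderr, "usage: %s <untrusted-object-file>\n", argv[0]);
      return 1;
    }

  pid_t pid = fork ();
  if (pid == 0)
    {
      /* Cap CPU time and address space in the child only.  */
      struct rlimit cpu = { 10, 10 };            /* 10 seconds.  */
      struct rlimit mem = { 1 << 30, 1 << 30 };  /* 1 GiB.  */
      setrlimit (RLIMIT_CPU, &cpu);
      setrlimit (RLIMIT_AS, &mem);
      execlp ("objdump", "objdump", "-d", argv[1], (char *) NULL);
      _exit (127);                               /* exec failed.  */
    }

  int status;
  waitpid (pid, &status, 0);
  return WIFEXITED (status) ? WEXITSTATUS (status) : 1;
}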
>
> 3) Data corruption leads to uncontrolled program execution.
>
> Nature:
> A bug such as unconstrained buffer overflow could lead to a ROP or JOP
> style attack if not fully contained. Once in control an attacker
> might be able to access any file that the user running the program has
> access to.
Likewise.
>
> Likelihood: Moderate
>
> Impact: High
>
> Mitigation: sandboxing can help if an attacker has direct control
> over inputs supplied to the tools or in cases where the inputs are
> particularly untrustworthy, but is not practical during normal
> usage.
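To make the nature of this one concrete, the classic shape of the bug
class is a size field read from the untrusted file driving a copy into
a fixed-size buffer. A purely hypothetical sketch (not actual binutils
code; the structure and names are invented):

#include <string.h>
#include <stdint.h>

/* Invented input structure: a length read straight from the file.  */
struct sec_hdr { uint32_t name_len; char name[]; };

static char namebuf[64];

/* Vulnerable: name_len is attacker controlled and trusted blindly, so
   a crafted file overflows namebuf and can corrupt control data.  */
void
read_name_bad (const struct sec_hdr *h)
{
  memcpy (namebuf, h->name, h->name_len);
}

/* Constrained: clamp the copy to the destination's size.  */
void
read_name_ok (const struct sec_hdr *h)
{
  size_t n = h->name_len;
  if (n > sizeof namebuf - 1)
    n = sizeof namebuf - 1;
  memcpy (namebuf, h->name, n);
  namebuf[n] = '\0';
}

Whether such an overflow is then exploitable for ROP/JOP depends on
what sits next to the buffer and on the usual hardening (FORTIFY,
stack protector, ASLR), which is why impact has to be judged per bug.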
>
> Threats arising from execution of output produced by GNU Binutils programs
> --------------------------------------------------------------------------
>
> Note for this category we explicitly exclude threats that exist in the
> input files supplied to the tools and only consider threats introduced
> by the tools themselves.
>
> 1) Incorrect generation of machine instructions leads to unintended
> program behavior.
>
> Nature:
> Many architectures have 'don't care' bits in the machine instructions.
> Generally the architecture will specify the value that such bits have,
> leaving room for future expansion of the instruction set. If tools do
> not correctly set these bits then a program may execute correctly on
> some machines, but fail on others.
>
> Likelihood: Low
>
> Impact: Moderate - this is unlikely to lead to an exploit, but might lead
> to DoS in some cases.
The impact in this case is context dependent; it will vary based on
other factors, such as whether a PoC is available, how common the
vulnerable code pattern is, and so on.
>
> Mitigation: cross testing generated output against third-party toolchain
> implementations.
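To illustrate the 'don't care' bits point: the assembler's encoder has
to force reserved bits to the architecturally specified value. A
sketch for an imaginary 32-bit encoding (everything here is made up)
with 4-bit register fields and must-be-zero reserved bits at 20-23:

#include <stdint.h>

#define OPCODE_ADD     0x0a000000u  /* made-up opcode */
#define RESERVED_MASK  0x00f00000u  /* bits 20-23: must be zero */

/* Masking out the reserved bits keeps the output well-defined even on
   a future core that assigns those bits a meaning.  */
uint32_t
encode_add (unsigned rd, unsigned rn, unsigned rm)
{
  uint32_t insn = OPCODE_ADD | (rd & 0xf) << 16 | (rn & 0xf) << 8
                  | (rm & 0xf);
  return insn & ~RESERVED_MASK;
}

Cross-testing the output against a third-party assembler, as you
suggest, is exactly what would catch a missing mask like this.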
>
> 2) Code directly generated by the tools contains a vulnerability
>
> Nature:
> The vast majority of code output from the tools comes from the input
> files supplied, but a small amount of 'glue' code might be needed in
> some cases, for example to enable jumping to another function in
> another part of the address space. Linkers are also sometimes asked
> to inject mitigations for known CPU errata when this cannot be done
> during the compilation phase.
Since you've split this one out from machine instructions, there's a
third category too: cases where the binutils tools generate incorrect
section alignment, section sizes, etc. There's also a (rare)
possibility of an infrequently used instruction having an incorrect
opcode mapping, resulting in a bug being masked when the binary is
dumped with objdump, or in the resulting code having undefined
behaviour.
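For that opcode mapping case, a cheap safeguard is a round-trip
self-test over the opcode tables. A sketch of the shape it could take,
with an imaginary two-entry table (none of these names or encodings
are real):

#include <assert.h>
#include <stdint.h>

struct opc { const char *name; uint32_t value; uint32_t mask; };

/* Made-up table: VALUE identifies the instruction under MASK.  */
static const struct opc table[] = {
  { "add", 0x0a000000u, 0xff000000u },
  { "sub", 0x0b000000u, 0xff000000u },
};

static const struct opc *
decode (uint32_t insn)
{
  for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
    if ((insn & table[i].mask) == table[i].value)
      return &table[i];
  return 0;
}

int
main (void)
{
  /* Every opcode must decode back to itself; this catches overlapping
     or mistyped table entries before they mask a real bug.  */
  for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
    assert (decode (table[i].value) == &table[i]);
  return 0;
}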
>
> Likelihood: Low
>
> Impact: Mostly low - the amount of code generated is very small and
> unlikely to involve buffers that contain risky data, so the chances of
> this directly leading to a vulnerability are low.
>
> Mitigation: monitor for processor vendor vulnerabilities and adjust tool
> code generation if needed.
Sid