What is a security bug?
Most security vulnerabilities in the GNU C Library materialize only after an application uses functionality in a specific way. It is therefore sometimes difficult to determine whether a defect in the GNU C Library constitutes a vulnerability as such. The following guidelines can help with a decision.
- Buffer overflows should be treated as security bugs if it is conceivable that the data triggering them can come from an untrusted source.
- Other bugs that cause memory corruption which is likely exploitable should be treated as security bugs.
- Information disclosure can be a security bug, especially if the exposure through actual applications can be determined.
- Memory leaks and races are security bugs if they cause service breakage.
- Stack overflows caused by unbounded alloca calls or variable-length arrays are security bugs if it is conceivable that the data triggering the overflow could come from an untrusted source.
- Stack overflows through deep recursion, and other crashes, are security bugs if they cause service breakage.
- Bugs that cripple the whole system (so that it does not even boot or fails to run most applications) are not security bugs, because they will not be exploitable in practice due to general system instability.
Bugs that crash nscd are generally security bugs, unless they can only be triggered by a trusted data source (DNS is not trusted, but NIS and LDAP probably are).
Security Exceptions describes subsystems for which determining the security status of bugs is especially complicated.
For consistency, if the bug has received a CVE name attributing it to the GNU C Library, it should be flagged security+.
Duplicates of security bugs (flagged with security+) should be flagged security-, to avoid cluttering the reporting.
In this context, "service breakage" means client-side privilege escalation (code execution) or server-side denial of service or privilege escalation through actual, concrete, non-synthetic applications. Or put differently, if the GNU C Library causes a security bug in an application (and the application uses the library in a standard-conforming manner or according to the manual), the GNU C Library bug should be treated as security-relevant.
Reporting private security bugs
All bugs reported in Bugzilla are public.
As a rule of thumb, security vulnerabilities which are exposed over the network or can be used for local privilege escalation (through existing applications, not synthetic test cases) should be reported privately. We expect that such critical security bugs are rare, and that most security bugs can be reported in Bugzilla, thus making them public immediately. If in doubt, you can file a private bug, as explained in the next paragraph.
If you want to report a security bug privately, so that it is not immediately public, please contact one of our downstream distributions with security teams. The following teams have volunteered to handle such bugs:
Please report the bug to just one of these teams. It will be shared with other teams as necessary.
The team you contact will take care of details such as vulnerability rating and CVE assignment. The team may ask you to file a public bug if the issue is sufficiently minor and does not warrant an embargo. An embargo is not a requirement for being credited with the discovery of a security vulnerability.
Reporting public security bugs
As mentioned under Reporting private security bugs, we expect that critical security bugs are rare and that most security bugs can be reported in Bugzilla, making them public immediately. When reporting a public security bug, the reporter should provide a rationale for the choice of public disclosure.
Triaging security bugs
This section is aimed at developers, not reporters.
Security-relevant bugs should be marked with security+, as per the Bugzilla security flag documentation, following the guidelines above. If you set the security+ flag, you should make sure the following information is included in the bug (usually in a bug comment):
- The first glibc version which includes the vulnerable code. If the vulnerability was introduced before glibc 2.4 (released in 2006), this information is not necessary.
- The commit or commits (identified by hash) that fix this vulnerability in the master branch, and (for historic security bugs) the first release that includes this fix.
- The summary should include the CVE names (if any), in parentheses at the end.
- If there is a single CVE name assigned to this bug, it should be set as an alias.
The following links are helpful for finding untriaged bugs:
Fixing security bugs
For changes to master, the regular consensus-driven process must be followed. It makes sense to obtain consensus in private, to ensure that the patch is likely in a committable state, before disclosing an embargoed vulnerability.
Security backports to release branches need to follow the release process.
Contact the website maintainers and have them draft a news entry for the website frontpage to direct users to the bug, the fix, or the mailing list discussions.
Security bugs flagged with security+ should have CVE identifiers.
For bugs which are public (thus all bugs in Bugzilla), CVE assignment has to happen through the oss-security mailing list. (Downstreams will eventually request CVE assignment through their public Bugzilla monitoring processes.)
For initially private security bugs, CVEs will be assigned as needed by the downstream security teams. Once a public bug is filed, the CVE name should be included in Bugzilla.