Bug 10100

Summary: Regression in hsearch_r(): Segmentation fault due to internal invariant violation
Product: glibc Reporter: Alexey Khoroshilov <khoroshilov>
Component: libc Assignee: Ulrich Drepper <drepper.fsp>
Severity: normal CC: 2947868523, fweimer, glibc-bugs
Priority: P2 Flags: fweimer: security-
Version: unspecified   
Target Milestone: ---   
Host: Target:
Build: Last reconfirmed:

Description Alexey Khoroshilov 2009-04-24 14:51:33 UTC
The modifications to hsearch_r() (see bug#6966) in glibc-2.9 lead to
segmentation faults in some circumstances as a consequence of an internal
invariant violation.

The description of hsearch_r in the glibc sources reads:

  We use a trick to speed up the lookup. The table is created by hcreate
  with one more element available. This enables us to use the index zero
  special. *This index will never be used because we store the first hash
  index in the field used where zero means not used.* Every other value
  means used. The used field can be used as a first fast comparison for
  equality of the stored and the parameter value. This helps to prevent
  unnecessary expensive calls of strcmp. 

But the new version stores the hash value in the 'used' field instead of the
table index, and the hash value can be zero. As a result this invariant can be
violated and a dereference of uninitialized memory may happen. An example
demonstrating the issue is available here:

Quick fix may be:

   hval = len;
   count = len;
   while (count-- > 0)
     {
       hval <<= 4;
       hval += item.key[count];
     }
+  /* make sure hash value is not zero */
+  if (!hval) hval = 1;
   /* First hash function: simply take the modul but prevent zero. */
   idx = hval % htab->size + 1;

But if the zero table index is no longer treated specially in the new scheme,
why do we need to allocate the extra element? So a long-term solution requires
some more investigation.
Comment 1 Ulrich Drepper 2009-04-24 18:21:28 UTC
This test got lost in the changes.  Added back.