[PATCH 1/2] i386: Generate lfence with load/indirect branch/ret [CVE-2020-0551]

Jan Beulich <jbeulich@suse.com>
Tue Apr 21 06:30:16 GMT 2020


On 21.04.2020 04:24, Hongtao Liu wrote:
> On Mon, Apr 20, 2020 at 3:34 PM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 20.04.2020 09:20, Hongtao Liu wrote:
>>> On Thu, Apr 16, 2020 at 4:33 PM Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 16.04.2020 07:34, Hongtao Liu wrote:
>>>>> @@ -4506,6 +4520,22 @@ insert_lfence_after (void)
>>>>> {
>>>>>   if (lfence_after_load && load_insn_p ())
>>>>>     {
>>>>> +      /* Insert lfence after rep cmps/scas only under
>>>>> +       -mlfence-after-load=all.  */
>>>>> +      if (((i.tm.base_opcode | 0x1) == 0xa7
>>>>> +         || (i.tm.base_opcode | 0x1) == 0xaf)
>>>>> +        && i.prefix[REP_PREFIX])
>>>>
>>>> I'm afraid I don't understand why the REP forms need treating
>>>> differently from the non-REP ones of the same insns.
>>>>
>>>
>>> Not all REP forms, just REP CMPS/SCAS which would change EFLAGS.
>>
>> Well, of course just the two. But this doesn't answer my question
>> as to why there is such a special case.
>>
> 
> There are also two REP string instructions that require special
> treatment. Specifically, the compare string (CMPS) and scan string
> (SCAS) instructions set EFLAGS in a manner that depends on the data
> being compared/scanned. When used with a REP prefix, the number of
> iterations may therefore vary depending on this data. If the data is a
> program secret chosen by the adversary using an LVI method, then this
> data-dependent behavior may leak some aspect of the secret. The
> solution is to unfold any REP CMPS and REP SCAS operations into a loop
> and insert an LFENCE after the CMPS/SCAS instruction. For example,
> REPNZ SCAS can be unfolded to:
> 
> .RepLoop:
>   JRCXZ .ExitRepLoop
>   DEC rcx  # or ecx if the REPNZ SCAS uses a 32-bit address size
>   SCAS
>   LFENCE
>   JNZ .RepLoop
> .ExitRepLoop:
>   ...
> 
> The request I got is to add options to handle or not handle REP
> CMPS/SCAS as well, plus to issue a warning.

But you don't handle them as per what you've written above, afaics.
Am I overlooking anything?
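
(To be concrete: afaict even with -mlfence-after-load=all the patch
would merely emit

        rep cmpsb
        lfence

i.e. a single fence after the whole REP insn, not the unfolded loop
with a per-iteration LFENCE described above.)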

> @@ -647,7 +656,8 @@ static enum lfence_before_ret_kind
>    {
>      lfence_before_ret_none = 0,
>      lfence_before_ret_not,
> -    lfence_before_ret_or
> +    lfence_before_ret_or,
> +    lfence_before_ret_shl
>    }
>  lfence_before_ret;
> 
> @@ -4350,22 +4360,28 @@ load_insn_p (void)
> 
>    if (!any_vex_p)
>      {
> -      /* lea  */
> -      if (i.tm.base_opcode == 0x8d)
> +      /* Anysize insns: lea, invlpg, clflush, prefetchnta, prefetcht0,
> + prefetcht1, prefetcht2, prefetchtw, bndmk, bndcl, bndcu, bndcn,
> + bndstx, bndldx, prefetchwt1, clflushopt, clwb, cldemote.  */

Bad indentation (also elsewhere, so this may be an issue with your
mail client)?

> @@ -4536,8 +4577,8 @@ insert_lfence_before (void)
> 
>        if (i.reg_operands == 1)
>   {
> -   /* Indirect branch via register.  Don't insert lfence with
> -      -mlfence-after-load=yes.  */
> +   /* Indirect branch via register. Insert lfence when
> +      -mlfence-after-load=none.  */
>     if (lfence_after_load
>         || lfence_before_indirect_branch == lfence_branch_memory)
>       return;

The changed comment is awkward to read - the reader will almost
certainly wonder why "none" implies an action. I think you either
want to explain this further, or revert to the original form by
simply making it "... with -mlfence-after-load={all,general}."
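
(If I'm reading the logic right, the point is that with
-mlfence-after-load={all,general} a register-indirect branch is
already guarded by the fence emitted after the load producing the
register value, e.g.

        mov     (%rdi), %rax
        lfence                  # inserted by -mlfence-after-load
        jmp     *%rax

so no further lfence is wanted immediately before the branch.  Saying
that in the comment would be clearer than referring to "none".)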

> @@ -4568,12 +4609,13 @@ insert_lfence_before (void)
>        return;
>      }
> 
> -  /* Output or/not and lfence before ret.  */
> +  /* Output or/not/shl and lfence before ret/lret/iret.  */
>    if (lfence_before_ret != lfence_before_ret_none
>        && (i.tm.base_opcode == 0xc2
>     || i.tm.base_opcode == 0xc3
>     || i.tm.base_opcode == 0xca
> -   || i.tm.base_opcode == 0xcb))
> +   || i.tm.base_opcode == 0xcb
> +   || i.tm.base_opcode == 0xcf))
>      {
>        if (last_insn.kind != last_insn_other
>     && last_insn.seg == now_seg)
> @@ -4583,33 +4625,50 @@ insert_lfence_before (void)
>   last_insn.name, i.tm.name);
>     return;
>   }
> -      if (lfence_before_ret == lfence_before_ret_or)
> - {
> -   /* orl: 0x830c2400.  */
> -   p = frag_more ((flag_code == CODE_64BIT ? 1 : 0) + 4 + 3);
> -   if (flag_code == CODE_64BIT)
> -     *p++ = 0x48;
> -   *p++ = 0x83;
> -   *p++ = 0xc;
> -   *p++ = 0x24;
> -   *p++ = 0x0;
> - }
> -      else
> +
> +      char prefix = i.prefix[DATA_PREFIX] && !(i.prefix[REX_PREFIX] & REX_W)
> + ? 0x66 : flag_code == CODE_64BIT ? 0x48 : 0x0;

While this now looks better, it's tailored to near RET. Far RET
as well as IRET default to 32-bit operand size in 64-bit mode.
I can't tell how relevant it is to match the effective operand size
of the guarded and guarding insns.
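
If matching is thought worthwhile, something along these lines might
do - untested sketch, just to illustrate the idea; near RET is the
only one of the guarded insns defaulting to 64-bit operand size in
64-bit mode:

      /* Sketch: derive the guarding insn's prefix from the guarded
         insn's effective operand size.  Near RET (0xc2/0xc3) defaults
         to 64-bit in 64-bit mode; far RET and IRET need REX.W.  */
      int near_ret = (i.tm.base_opcode == 0xc2
                      || i.tm.base_opcode == 0xc3);
      char prefix = 0x0;
      if (i.prefix[DATA_PREFIX] && !(i.prefix[REX_PREFIX] & REX_W))
        prefix = 0x66;
      else if (flag_code == CODE_64BIT
               && (near_ret || (i.prefix[REX_PREFIX] & REX_W)))
        prefix = 0x48;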

> +      if (lfence_before_ret == lfence_before_ret_not)
>   {
> -   p = frag_more ((flag_code == CODE_64BIT ? 2 : 0) + 6 + 3);
> -   /* notl: 0xf71424.  */
> -   if (flag_code == CODE_64BIT)
> -     *p++ = 0x48;
> +   /* notl: 0xf71424, may add prefix
> +      for operand size overwrite or 64-bit code.  */

Despite the comment extension you still say "notl". Please switch
to either just "not" or something like "not{w,l,q}". Also
s/overwrite/override/. Note how you ...

> +   p = frag_more ((prefix ? 2 : 0) + 6 + 3);
> +   if (prefix)
> +     *p++ = prefix;
>     *p++ = 0xf7;
>     *p++ = 0x14;
>     *p++ = 0x24;
> -   /* notl: 0xf71424.  */
> -   if (flag_code == CODE_64BIT)
> -     *p++ = 0x48;
> +   if (prefix)
> +     *p++ = prefix;
>     *p++ = 0xf7;
>     *p++ = 0x14;
>     *p++ = 0x24;
>   }
> +      else
> + {
> +   p = frag_more ((prefix ? 1 : 0) + 4 + 3);
> +   if (prefix)
> +     *p++ = prefix;
> +   if (lfence_before_ret == lfence_before_ret_or)
> +     {
> +       /* orl: 0x830c2400, may add prefix
> + for operand size overwrite or 64-bit code.  */

... also have the same (bogus) suffix here, but ...

> +       *p++ = 0x83;
> +       *p++ = 0x0c;
> +     }
> +   else
> +     {
> +       /* shl: 0xc1242400, may add prefix
> + for operand size overwrite or 64-bit code.  */

... not here.
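
(For reference, the emitted sequences decode as

        f7 14 24        not{w,l,q} (%rsp)
        83 0c 24 00     or{w,l,q}  $0, (%rsp)
        c1 24 24 00     shl{w,l,q} $0, (%rsp)

with the suffix determined by the 0x66 or REX.W prefix, if any -
hence the suggestion to drop the "l" in the first two comments or to
use the {w,l,q} spelling throughout.)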

Jan

