[PATCH 1/2] i386: Generate lfence with load/indirect branch/ret [CVE-2020-0551]

Jan Beulich <jbeulich@suse.com>
Wed Mar 25 10:03:37 GMT 2020


On 25.03.2020 10:27, Hongtao Liu wrote:
> On Thu, Mar 12, 2020 at 12:17 AM H.J. Lu <hjl.tools@gmail.com> wrote:
>> On Wed, Mar 11, 2020 at 3:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>>> On 10.03.2020 17:05, H.J. Lu wrote:
>>>> +  dest = i.operands - 1;
>>>> +
>>>> +  /* Check fake imm8 operand and 3 source operands.  */
>>>> +  if ((i.tm.opcode_modifier.immext
>>>> +       || i.tm.opcode_modifier.vexsources == VEX3SOURCES)
>>>> +      && i.types[dest].bitfield.imm8)
>>>> +    dest--;
>>>> +
>>>> +  /* add, or, adc, sbb, and, sub, xor, cmp, test, xchg, xadd  */
>>>> +  if (!any_vex_p
>>>> +      && (i.tm.base_opcode == 0x0
>>>> +       || i.tm.base_opcode == 0x1
>>>> +       || i.tm.base_opcode == 0x8
>>>> +       || i.tm.base_opcode == 0x9
>>>> +       || i.tm.base_opcode == 0x10
>>>> +       || i.tm.base_opcode == 0x11
>>>> +       || i.tm.base_opcode == 0x18
>>>> +       || i.tm.base_opcode == 0x19
>>>> +       || i.tm.base_opcode == 0x20
>>>> +       || i.tm.base_opcode == 0x21
>>>> +       || i.tm.base_opcode == 0x28
>>>> +       || i.tm.base_opcode == 0x29
>>>> +       || i.tm.base_opcode == 0x30
>>>> +       || i.tm.base_opcode == 0x31
>>>> +       || i.tm.base_opcode == 0x38
>>>> +       || i.tm.base_opcode == 0x39
>>>> +       || (i.tm.base_opcode >= 0x84 && i.tm.base_opcode <= 0x87)
>>>> +       || i.tm.base_opcode == 0xfc0
>>>> +       || i.tm.base_opcode == 0xfc1))
>>>> +    return 1;
>>>
>>> Don't quite a few of these fit very well with ...
>>>
>>
>> Changed.
>>
>>>> +  /* Check for load instruction.  */
>>>> +  return (i.types[dest].bitfield.class != ClassNone
>>>> +       || i.types[dest].bitfield.instance == Accum);
>>>
>>> ... this generic expression? It would seem to me that only TEST
>>> and XCHG need special casing, for allowing either operand order.
>>> Same seems to apply to quite a few of the special cases in the
>>> big "else" block further up, and even its if() [vldmxcsr] part.
>>
>> Hongtao, can you look into it?
>>
> 
> Many instructions take a mem operand both as input and output; they also
> need to be handled. But they don't fit well into a generic expression,
> because they either have only 1 operand, or the mem operand is in the
> dest place.

Well, my earlier reply wasn't quite precise enough, I think. As I
understand it, what you want to exclude are insns which only write their
memory operand. Given the pretty long list of excluded opcodes, I wonder
whether this could be re-arranged to use a common pattern (memory
operand is destination) and exclude only the few insns which don't also
read that operand. Then again, by using a few & and | the list above
could be shrunk significantly, and hence might no longer look this odd
(I notice the committed version has this reduced a little, but not quite
as much as would be possible).

>>>> +  if (lfence_before_ret != lfence_before_ret_none
>>>> +      && (i.tm.base_opcode == 0xc2
>>>> +       || i.tm.base_opcode == 0xc3
>>>> +       || i.tm.base_opcode == 0xca
>>>> +       || i.tm.base_opcode == 0xcb))
>>>> +    {
>>>> +      if (last_insn.kind != last_insn_other
>>>> +       && last_insn.seg == now_seg)
>>>> +     {
>>>> +       as_warn_where (last_insn.file, last_insn.line,
>>>> +                      _("`%s` skips -mlfence-before-ret on `%s`"),
>>>> +                      last_insn.name, i.tm.name);
>>>> +       return;
>>>> +     }
>>>> +      if (lfence_before_ret == lfence_before_ret_or)
>>>> +     {
>>>> +       /* orl: 0x830c2400.  */
>>>> +       p = frag_more ((flag_code == CODE_64BIT ? 1 : 0) + 4 + 3);
>>>> +       if (flag_code == CODE_64BIT)
>>>> +         *p++ = 0x48;
>>>
>>> Shouldn't this depend on RET's operand size? Likewise wouldn't you
>>> also need to insert 0x66/0x67 in certain cases?
>>
>> Hongtao, can you look into it?
> 
> I suppose you mean OR's operand size?

Not exactly - I mean RET's operand size ought to affect the one
chosen for OR.
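For illustration, roughly what that would mean for the emission quoted
above -- a sketch only, untested, and it assumes (as for other insns in
tc-i386.c) that a 0x66 operand-size prefix on the RET is recorded in
i.prefix[DATA_PREFIX]; the 0x67 address-size question is left aside:

  if (lfence_before_ret == lfence_before_ret_or)
    {
      /* Room for an optional 66/REX.W prefix, the 4-byte OR and the
         3-byte LFENCE.  */
      p = frag_more ((flag_code == CODE_64BIT || i.prefix[DATA_PREFIX] ? 1 : 0)
                     + 4 + 3);
      if (i.prefix[DATA_PREFIX])
        /* 16-bit RET: orw $0, (%rsp)/(%esp).  */
        *p++ = 0x66;
      else if (flag_code == CODE_64BIT)
        /* 64-bit RET: orq $0, (%rsp).  */
        *p++ = 0x48;
      /* 83 0c 24 00: or $0x0, (%esp)/(%rsp).  */
      *p++ = 0x83;
      *p++ = 0x0c;
      *p++ = 0x24;
      *p++ = 0x0;
      /* 0f ae e8: lfence.  */
      *p++ = 0xf;
      *p++ = 0xae;
      *p++ = 0xe8;
    }

That way the OR would presumably touch exactly the bytes the RET is
going to pop, e.g. a 16-bit RET (66 c3) would get 66 83 0c 24 00 in
front of it rather than the 32- or 64-bit form.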

Jan


