    35cfc9a3
Remove as many unnecessary moves as possible (#6342)
    Kevin Newton authored
    This commit does a bunch of stuff to try to eliminate as many
    unnecessary mov instructions as possible.
    
    First, it introduces the Insn::LoadInto instruction. Previously
    when we needed a value to go into a specific register (like in
    Insn::CCall when we're putting values into the argument registers
    or in Insn::CRet when we're putting a value into the return
    register) we would first load the value and then mov it into the
    correct register. This resulted in a lot of duplicated work with
    short live ranges, since the loaded values became unnecessary
    almost immediately.
    The new instruction accepts a destination and does not interact
    with the register allocator at all, making it much more efficient.
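    The idea can be sketched in a small illustrative IR (the types and
    helpers below are hypothetical stand-ins, not YJIT's actual
    definitions): a plain Load produces an allocator-chosen output that
    then needs a second Mov to reach the fixed destination, while
    LoadInto carries the destination itself and lowers to a single
    instruction.

    ```rust
    // Illustrative mini-IR, not YJIT's real types: shows how a
    // destination-carrying LoadInto avoids a Load-then-Mov pair.
    #[derive(Debug, Clone, Copy, PartialEq)]
    enum Opnd {
        Mem(i32), // a stack slot (hypothetical encoding)
        Reg(u8),  // a specific hardware register
    }

    #[derive(Debug, Clone, PartialEq)]
    enum Insn {
        Load { opnd: Opnd },                 // allocator picks the output register
        Mov { dest: Opnd, src: Opnd },       // copy into a fixed register
        LoadInto { dest: Opnd, opnd: Opnd }, // load straight into `dest`
    }

    // Old lowering: load into an allocator-chosen scratch register,
    // then mov the result into the required register.
    fn lower_arg_old(slot: Opnd, arg_reg: Opnd, scratch: Opnd) -> Vec<Insn> {
        vec![
            Insn::Load { opnd: slot },
            Insn::Mov { dest: arg_reg, src: scratch },
        ]
    }

    // New lowering: one instruction, no register allocator involvement.
    fn lower_arg_new(slot: Opnd, arg_reg: Opnd) -> Vec<Insn> {
        vec![Insn::LoadInto { dest: arg_reg, opnd: slot }]
    }

    fn main() {
        let slot = Opnd::Mem(0);
        let arg_reg = Opnd::Reg(0); // e.g. the first C argument register
        let old = lower_arg_old(slot, arg_reg, Opnd::Reg(9));
        let new = lower_arg_new(slot, arg_reg);
        assert_eq!(old.len(), 2);
        assert_eq!(new.len(), 1);
        println!("old: {} insns, new: {} insns", old.len(), new.len());
    }
    ```

    Because LoadInto's destination is fixed up front, the allocator never
    has to track a short-lived intermediate value for it.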
    
    We then use the new instruction when we're loading values into
    argument registers for AArch64 or X86_64, and when we're returning
    a value from AArch64. Notably we don't do it when we're returning
    a value from X86_64 because everything can be accomplished with a
    single mov anyway.
    
    A couple of unnecessary movs were also present because, in many
    split passes, we called the split_load_opnd function on operands
    that were already registers or instruction outputs, loading them
    again for no benefit. We no longer do that.
    
    This commit also makes it so that UImm(0) passes through the
    Insn::Store split without attempting to be loaded, which allows it
    to take advantage of the zero register. So now instead of mov-ing
    0 into a register and then calling store, it just stores XZR.
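    That special case can be sketched as follows (again with
    hypothetical types, not YJIT's real split-pass code): a zero
    immediate maps straight to XZR, while any other immediate still
    needs to be materialized into a scratch register first.

    ```rust
    // Illustrative sketch of the store-split special case: storing the
    // immediate 0 on AArch64 can use the zero register XZR directly,
    // skipping the usual mov-immediate-into-register step.
    #[derive(Debug, Clone, Copy, PartialEq)]
    enum Opnd {
        UImm(u64), // unsigned immediate
        Reg(u8),   // general-purpose register (hypothetical numbering)
    }

    // AArch64's always-zero register, encoded here as register 31.
    const XZR: Opnd = Opnd::Reg(31);

    // Split a store's source operand: immediates normally need to be
    // loaded into a register first, but 0 can come from XZR for free.
    fn split_store_src(src: Opnd, load_into_scratch: impl Fn(Opnd) -> Opnd) -> Opnd {
        match src {
            Opnd::UImm(0) => XZR,                    // no mov needed
            Opnd::UImm(_) => load_into_scratch(src), // mov imm into a scratch reg
            other => other,                          // already a register
        }
    }

    fn main() {
        let scratch = |_: Opnd| Opnd::Reg(9); // stand-in for the allocator
        assert_eq!(split_store_src(Opnd::UImm(0), scratch), XZR);
        assert_eq!(split_store_src(Opnd::UImm(7), scratch), Opnd::Reg(9));
        println!("ok");
    }
    ```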