x64 backend: fix condition-code used for part of explicit heap check.
A dynamic heap address computation may create up to two conditional
branches: the usual bounds-check, but also (in some cases) an
offset-addition overflow check.

The x64 backend had the condition code for this check reversed,
causing it to trap unconditionally even on a valid offset. I'm
somewhat surprised this has existed so long, but I suppose the
particular conditions (large offset, small offset guard, dynamic heap)
have been somewhat rare in our testing so far.

Found via fuzzing in bytecodealliance#2453.
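
The carry-flag-to-IntCC mapping the fix restores can be sanity-checked outside the backend: an unsigned add sets CF exactly when the wrapped sum is strictly less than either operand, and an unsigned sub sets CF (borrow) exactly when the subtrahend is larger. A minimal Rust sketch (not Cranelift code; the function names are invented for illustration):

```rust
/// True iff the unsigned add `a + b` overflows, i.e. iff x86 `add`
/// would set the carry flag. The wrapped sum is then strictly less
/// than either operand, matching `IntCC::UnsignedLessThan`.
fn add_sets_carry(a: u32, b: u32) -> bool {
    a.wrapping_add(b) < a
}

/// True iff the unsigned sub `a - b` underflows, i.e. iff x86 `sub`
/// would set the carry (borrow) flag.
fn sub_sets_carry(a: u32, b: u32) -> bool {
    a.wrapping_sub(b) > a
}

fn main() {
    assert!(!add_sets_carry(1, 2));
    assert!(add_sets_carry(u32::MAX, 1));
    assert!(!sub_sets_carry(2, 1));
    assert!(sub_sets_carry(1, 2));
    println!("ok");
}
```

With the old, reversed `UnsignedGreaterThanOrEqual` mapping, the "overflow" branch would be taken precisely when no carry occurred, which is why valid offsets always trapped.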
cfallin committed Dec 2, 2020
1 parent d1662a5 commit c9a81f0
Showing 2 changed files with 29 additions and 6 deletions.
12 changes: 6 additions & 6 deletions cranelift/codegen/src/isa/x64/mod.rs

@@ -92,15 +92,15 @@ impl MachBackend for X64Backend {
     }
 
     fn unsigned_add_overflow_condition(&self) -> IntCC {
-        // Unsigned `>=`; this corresponds to the carry flag set on x86, which happens on
-        // overflow of an add.
-        IntCC::UnsignedGreaterThanOrEqual
+        // Unsigned `<`; this corresponds to the carry flag set on x86, which
+        // indicates an add has overflowed.
+        IntCC::UnsignedLessThan
     }
 
     fn unsigned_sub_overflow_condition(&self) -> IntCC {
-        // unsigned `>=`; this corresponds to the carry flag set on x86, which happens on
-        // underflow of a subtract (carry is borrow for subtract).
-        IntCC::UnsignedGreaterThanOrEqual
+        // unsigned `<`; this corresponds to the carry flag set on x86, which
+        // indicates a sub has underflowed (carry is borrow for subtract).
+        IntCC::UnsignedLessThan
     }
 
     #[cfg(feature = "unwind")]
23 changes: 23 additions & 0 deletions cranelift/filetests/filetests/isa/x64/heap.clif

@@ -0,0 +1,23 @@
+test compile
+target x86_64
+feature "experimental_x64"
+
+function %f(i32, i64 vmctx) -> i64 {
+    gv0 = vmctx
+    gv1 = load.i64 notrap aligned gv0+0
+    gv2 = load.i32 notrap aligned gv0+8
+    heap0 = dynamic gv1, bound gv2, offset_guard 0x1000, index_type i32
+
+block0(v0: i32, v1: i64):
+
+  v2 = heap_addr.i64 heap0, v0, 0x8000
+; check: movl 8(%rsi), %r12d
+; nextln: movq %rdi, %r13
+; nextln: addl $$32768, %r13d
+; nextln: jnb ; ud2 heap_oob ;
+; nextln: cmpl %r12d, %r13d
+; nextln: jbe label1; j label2
+; check: Block 1:
+
+  return v2
+}
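
The new filetest exercises both branches the lowering emits: the `addl` followed by `jnb ; ud2` guards the offset addition itself, and the `cmpl`/`jbe` pair is the actual bounds check against the dynamic bound loaded from the vmctx. A hypothetical Rust sketch of the two checks (the helper name and `Option` signaling are invented for illustration; they are not Cranelift's API):

```rust
/// Compute `index + offset` with the two checks a dynamic heap_addr
/// lowers to, returning None where the generated code would trap.
fn heap_addr(index: u32, offset: u32, bound: u32) -> Option<u32> {
    // Overflow check: the `addl` + `jnb ; ud2 heap_oob` pair.
    // This is the condition whose code was reversed before this fix.
    let addr = index.checked_add(offset)?;
    // Bounds check: the `cmpl` + `jbe` pair against the dynamic bound.
    if addr <= bound {
        Some(addr)
    } else {
        None
    }
}
```

Under the reversed condition, the first check trapped exactly when the addition did not wrap, so every in-bounds access through a heap like the one above hit `ud2`.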
