
X86 JIT misencodes cbw and cwd #1502

Description

@llvmbot
Bugzilla Link 1130
Resolution FIXED
Resolved on Feb 22, 2010 12:54
Version trunk
OS All
Attachments Byte-code demonstrating the problem
Reporter LLVM Bugzilla Contributor

Extended Description

$ lli f.bc
lli((anonymous namespace)::PrintStackTrace()+0x1a)[0x878977a]
lli((anonymous namespace)::SignalHandler(int)+0x112)[0x8789a40]
[0xb7f04420]
lli(llvm::ExecutionEngine::runFunctionAsMain(llvm::Function*,
std::vector<std::basic_string<char, std::char_traits<char>,
std::allocator<char> >, std::allocator<std::basic_string<char,
std::char_traits<char>, std::allocator<char> > > > const&, char const*
const*)+0x23a)[0x84cc066]
lli(main+0x2ba)[0x83b6056]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xdc)[0xb7c8cebc]
lli[0x83b58e1]
Floating point exception

vs

$ lli -force-interpreter f.bc
-128 rd 1 = 0

The C back-end agrees with lli -force-interpreter (and so do I!).
Running "bugpoint --run-jit f.bc" reduces it a bit.

I'll send the byte-code as an attachment in a moment.

PS: I should probably report this against one of the libraries, but
it is not clear where x86 JIT bugs should go.
