From: Alessandro Sardo <sandro.sardo-8RLafaVCWuNeoWH0uzbU5w@public.gmane.org>
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
Subject: Re: KVM-33 + Windows Server 2003 = VMX->OK / SVM->kernel panic?
Date: Tue, 24 Jul 2007 14:45:58 +0200 [thread overview]
Message-ID: <46A5F486.40302@polito.it> (raw)
In-Reply-To: <46A5F029.4000002-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
[-- Attachment #1: Type: text/plain, Size: 3555 bytes --]
There you go.
AS
Avi Kivity wrote:
> Alessandro Sardo wrote:
>
> One at a time.
>
>> Test #3
>> CPU model: 1 x Single-Core AMD Athlon64 Processor 3500+ AM2
>> Host: RHEL5 x86_64, kernel 2.6.18-8.1.8.el5
>> KVM-33
>> Result -> boots fine, but when I try to install the SP2 I get the
>> following kernel panic:
>>
>> Unable to handle kernel NULL pointer dereference at 0000000000000000
>> RIP:
>> [<ffffffff883d80ce>] :kvm:x86_emulate_memop+0x2a79/0x3b03
>> PGD 7b8a067 PUD 3973067 PMD 0
>> Oops: 0002 [1] SMP
>> last sysfs file: /class/net/lo/ifindex
>> CPU 0
>> Modules linked in: kvm_amd(U) kvm(U) tun netconsole bridge
>> cpufreq_ondemand video sbs i2c_ec button battery asus_acpi
>> acpi_memhotplug ac lp snd_hda_intel snd_hda_codec snd_seq_dummy
>> snd_seq_oss snd_seq_midi_event snd_seq snd_seq_device snd_pcm_oss
>> floppy snd_mixer_oss snd_pcm pcspkr sg i2c_nforce2 i2c_core snd_timer
>> snd shpchp soundcore k8_edac snd_page_alloc parport_pc edac_mc
>> forcedeth parport ide_cd cdrom serio_raw dm_snapshot dm_zero
>> dm_mirror dm_mod sata_nv libata sd_mod scsi_mod ext3 jbd ehci_hcd
>> ohci_hcd uhci_hcd
>> Pid: 2168, comm: kvm Not tainted 2.6.18-8.1.8.el5 #1
>> RIP: 0010:[<ffffffff883d80ce>] [<ffffffff883d80ce>] :kvm:x86_emulate_memop+0x2a79/0x3b03
>> RSP: 0018:ffff81006225d9d8 EFLAGS: 00010246
>> RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000003
>> RDX: ffff81006225da60 RSI: 0000000000000000 RDI: ffff81006207a130
>> RBP: ffffffff883df621 R08: 0000000000000200 R09: 0000000000000000
>> R10: ffff81003f18d000 R11: ffffffff883eb542 R12: 0000000000000000
>> R13: ffff81006225db78 R14: 0000000000000000 R15: 0000000000000000
>> FS: 00000000ffdff000(0000) GS:ffffffff8038a000(0000) knlGS:0000000000000000
>> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> CR2: 0000000000000000 CR3: 0000000062cd7000 CR4: 00000000000006e0
>> Process kvm (pid: 2168, threadinfo ffff81006225c000, task ffff81007d987100)
>> Stack: ffffffff883df600 03007fffe0e1ce40 00007fffe0e1cea0 0300010000000040
>> ffff81006225db00 0000000000000004 0000000400000000 0300000000000000
>> 0000000000000000 0000000000000004 0000000400000000 0000000000000000
>> Call Trace:
>> [<ffffffff883eb542>] :kvm_amd:svm_get_segment_base+0x0/0x5f
>> [<ffffffff883d5657>] :kvm:x86_emulate_memop+0x2/0x3b03
>> [<ffffffff883d0e95>] :kvm:emulate_instruction+0xee/0x278
>> [<ffffffff883eb542>] :kvm_amd:svm_get_segment_base+0x0/0x5f
>> [<ffffffff883eacec>] :kvm_amd:emulate_on_interception+0xf/0x30
>> [<ffffffff883ebbed>] :kvm_amd:svm_vcpu_run+0x506/0x599
>> [<ffffffff883d11e4>] :kvm:kvm_vcpu_ioctl+0x1c5/0xd04
>> [<ffffffff80044d31>] try_to_wake_up+0x407/0x418
>> [<ffffffff800850ed>] __wake_up_common+0x3e/0x68
>> [<ffffffff8002dd9b>] __wake_up+0x38/0x4f
>> [<ffffffff800d8498>] core_sys_select+0x234/0x265
>> [<ffffffff80093f38>] __dequeue_signal+0x18b/0x19b
>> [<ffffffff80094fd2>] dequeue_signal+0x3c/0xbc
>> [<ffffffff8003fc5a>] do_ioctl+0x21/0x6b
>> [<ffffffff8002fa60>] vfs_ioctl+0x248/0x261
>> [<ffffffff80058bf0>] getnstimeofday+0x10/0x28
>> [<ffffffff8004a266>] sys_ioctl+0x59/0x78
>> [<ffffffff8005b14e>] system_call+0x7e/0x83
>>
>>
>> Code: 4c 89 00 eb 63 48 8b 94 24 f8 00 00 00 48 8b 84 24 08 01 00
>> RIP [<ffffffff883d80ce>] :kvm:x86_emulate_memop+0x2a79/0x3b03
>> RSP <ffff81006225d9d8>
>> CR2: 0000000000000000
>> <0>Kernel panic - not syncing: Fatal exception
>> ----
>>
>
> Can you post the output of 'objdump -Sr kernel/x86_emulate.o'? Please
> ensure that it is exactly the same object used to generate this oops.
[-- Attachment #2: x86_emulate.txt --]
[-- Type: text/plain, Size: 230504 bytes --]
/usr/src/kvm-33/kernel/x86_emulate.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <kvm_emulator_want_group7_invlpg>:
* be mapped.
*/
void kvm_emulator_want_group7_invlpg(void)
{
twobyte_table[1] &= ~SrcMem;
0: 66 83 25 00 00 00 00 andw $0xffffffffffffffef,0(%rip) # 8 <kvm_emulator_want_group7_invlpg+0x8>
7: ef
3: R_X86_64_PC32 .data+0xfffffffffffffffd
}
8: c3 retq
0000000000000009 <decode_register>:
EXPORT_SYMBOL_GPL(kvm_emulator_want_group7_invlpg);
/* Type, address-of, and value of an instruction's operand. */
struct operand {
enum { OP_REG, OP_MEM, OP_IMM } type;
unsigned int bytes;
unsigned long val, orig_val, *ptr;
};
/* EFLAGS bit definitions. */
#define EFLG_OF (1<<11)
#define EFLG_DF (1<<10)
#define EFLG_SF (1<<7)
#define EFLG_ZF (1<<6)
#define EFLG_AF (1<<4)
#define EFLG_PF (1<<2)
#define EFLG_CF (1<<0)
/*
* Instruction emulation:
* Most instructions are emulated directly via a fragment of inline assembly
* code. This allows us to save/restore EFLAGS and thus very easily pick up
* any modified flags.
*/
#if defined(CONFIG_X86_64)
#define _LO32 "k" /* force 32-bit operand */
#define _STK "%%rsp" /* stack pointer */
#elif defined(__i386__)
#define _LO32 "" /* force 32-bit operand */
#define _STK "%%esp" /* stack pointer */
#endif
/*
* These EFLAGS bits are restored from saved value during emulation, and
* any changes are written back to the saved value after emulation.
*/
#define EFLAGS_MASK (EFLG_OF|EFLG_SF|EFLG_ZF|EFLG_AF|EFLG_PF|EFLG_CF)
/* Before executing instruction: restore necessary bits in EFLAGS. */
#define _PRE_EFLAGS(_sav, _msk, _tmp) \
/* EFLAGS = (_sav & _msk) | (EFLAGS & ~_msk); */ \
"push %"_sav"; " \
"movl %"_msk",%"_LO32 _tmp"; " \
"andl %"_LO32 _tmp",("_STK"); " \
"pushf; " \
"notl %"_LO32 _tmp"; " \
"andl %"_LO32 _tmp",("_STK"); " \
"pop %"_tmp"; " \
"orl %"_LO32 _tmp",("_STK"); " \
"popf; " \
/* _sav &= ~msk; */ \
"movl %"_msk",%"_LO32 _tmp"; " \
"notl %"_LO32 _tmp"; " \
"andl %"_LO32 _tmp",%"_sav"; "
/* After executing instruction: write-back necessary bits in EFLAGS. */
#define _POST_EFLAGS(_sav, _msk, _tmp) \
/* _sav |= EFLAGS & _msk; */ \
"pushf; " \
"pop %"_tmp"; " \
"andl %"_msk",%"_LO32 _tmp"; " \
"orl %"_LO32 _tmp",%"_sav"; "
/* Raw emulation: instruction has two explicit operands. */
#define __emulate_2op_nobyte(_op,_src,_dst,_eflags,_wx,_wy,_lx,_ly,_qx,_qy) \
do { \
unsigned long _tmp; \
\
switch ((_dst).bytes) { \
case 2: \
__asm__ __volatile__ ( \
_PRE_EFLAGS("0","4","2") \
_op"w %"_wx"3,%1; " \
_POST_EFLAGS("0","4","2") \
: "=m" (_eflags), "=m" ((_dst).val), \
"=&r" (_tmp) \
: _wy ((_src).val), "i" (EFLAGS_MASK) ); \
break; \
case 4: \
__asm__ __volatile__ ( \
_PRE_EFLAGS("0","4","2") \
_op"l %"_lx"3,%1; " \
_POST_EFLAGS("0","4","2") \
: "=m" (_eflags), "=m" ((_dst).val), \
"=&r" (_tmp) \
: _ly ((_src).val), "i" (EFLAGS_MASK) ); \
break; \
case 8: \
__emulate_2op_8byte(_op, _src, _dst, \
_eflags, _qx, _qy); \
break; \
} \
} while (0)
#define __emulate_2op(_op,_src,_dst,_eflags,_bx,_by,_wx,_wy,_lx,_ly,_qx,_qy) \
do { \
unsigned long _tmp; \
switch ( (_dst).bytes ) \
{ \
case 1: \
__asm__ __volatile__ ( \
_PRE_EFLAGS("0","4","2") \
_op"b %"_bx"3,%1; " \
_POST_EFLAGS("0","4","2") \
: "=m" (_eflags), "=m" ((_dst).val), \
"=&r" (_tmp) \
: _by ((_src).val), "i" (EFLAGS_MASK) ); \
break; \
default: \
__emulate_2op_nobyte(_op, _src, _dst, _eflags, \
_wx, _wy, _lx, _ly, _qx, _qy); \
break; \
} \
} while (0)
/* Source operand is byte-sized and may be restricted to just %cl. */
#define emulate_2op_SrcB(_op, _src, _dst, _eflags) \
__emulate_2op(_op, _src, _dst, _eflags, \
"b", "c", "b", "c", "b", "c", "b", "c")
/* Source operand is byte, word, long or quad sized. */
#define emulate_2op_SrcV(_op, _src, _dst, _eflags) \
__emulate_2op(_op, _src, _dst, _eflags, \
"b", "q", "w", "r", _LO32, "r", "", "r")
/* Source operand is word, long or quad sized. */
#define emulate_2op_SrcV_nobyte(_op, _src, _dst, _eflags) \
__emulate_2op_nobyte(_op, _src, _dst, _eflags, \
"w", "r", _LO32, "r", "", "r")
/* Instruction has only one explicit operand (no source operand). */
#define emulate_1op(_op, _dst, _eflags) \
do { \
unsigned long _tmp; \
\
switch ( (_dst).bytes ) \
{ \
case 1: \
__asm__ __volatile__ ( \
_PRE_EFLAGS("0","3","2") \
_op"b %1; " \
_POST_EFLAGS("0","3","2") \
: "=m" (_eflags), "=m" ((_dst).val), \
"=&r" (_tmp) \
: "i" (EFLAGS_MASK) ); \
break; \
case 2: \
__asm__ __volatile__ ( \
_PRE_EFLAGS("0","3","2") \
_op"w %1; " \
_POST_EFLAGS("0","3","2") \
: "=m" (_eflags), "=m" ((_dst).val), \
"=&r" (_tmp) \
: "i" (EFLAGS_MASK) ); \
break; \
case 4: \
__asm__ __volatile__ ( \
_PRE_EFLAGS("0","3","2") \
_op"l %1; " \
_POST_EFLAGS("0","3","2") \
: "=m" (_eflags), "=m" ((_dst).val), \
"=&r" (_tmp) \
: "i" (EFLAGS_MASK) ); \
break; \
case 8: \
__emulate_1op_8byte(_op, _dst, _eflags); \
break; \
} \
} while (0)
/* Emulate an instruction with quadword operands (x86/64 only). */
#if defined(CONFIG_X86_64)
#define __emulate_2op_8byte(_op, _src, _dst, _eflags, _qx, _qy) \
do { \
__asm__ __volatile__ ( \
_PRE_EFLAGS("0","4","2") \
_op"q %"_qx"3,%1; " \
_POST_EFLAGS("0","4","2") \
: "=m" (_eflags), "=m" ((_dst).val), "=&r" (_tmp) \
: _qy ((_src).val), "i" (EFLAGS_MASK) ); \
} while (0)
#define __emulate_1op_8byte(_op, _dst, _eflags) \
do { \
__asm__ __volatile__ ( \
_PRE_EFLAGS("0","3","2") \
_op"q %1; " \
_POST_EFLAGS("0","3","2") \
: "=m" (_eflags), "=m" ((_dst).val), "=&r" (_tmp) \
: "i" (EFLAGS_MASK) ); \
} while (0)
#elif defined(__i386__)
#define __emulate_2op_8byte(_op, _src, _dst, _eflags, _qx, _qy)
#define __emulate_1op_8byte(_op, _dst, _eflags)
#endif /* __i386__ */
/* Fetch next part of the instruction being emulated. */
#define insn_fetch(_type, _size, _eip) \
({ unsigned long _x; \
rc = ops->read_std((unsigned long)(_eip) + ctxt->cs_base, &_x, \
(_size), ctxt); \
if ( rc != 0 ) \
goto done; \
(_eip) += (_size); \
(_type)_x; \
})
/* Access/update address held in a register, based on addressing mode. */
#define register_address(base, reg) \
((base) + ((ad_bytes == sizeof(unsigned long)) ? (reg) : \
((reg) & ((1UL << (ad_bytes << 3)) - 1))))
#define register_address_increment(reg, inc) \
do { \
/* signed type ensures sign extension to long */ \
int _inc = (inc); \
if ( ad_bytes == sizeof(unsigned long) ) \
(reg) += _inc; \
else \
(reg) = ((reg) & ~((1UL << (ad_bytes << 3)) - 1)) | \
(((reg) + _inc) & ((1UL << (ad_bytes << 3)) - 1)); \
} while (0)
/*
* Given the 'reg' portion of a ModRM byte, and a register block, return a
* pointer into the block that addresses the relevant register.
* @highbyte_regs specifies whether to decode AH,CH,DH,BH.
*/
static void *decode_register(u8 modrm_reg, unsigned long *regs,
int highbyte_regs)
{
9: 40 88 f9 mov %dil,%cl
void *p;
p = &regs[modrm_reg];
if (highbyte_regs && modrm_reg >= 4 && modrm_reg < 8)
c: 85 d2 test %edx,%edx
e: 40 0f b6 ff movzbl %dil,%edi
12: 48 8d 04 fe lea (%rsi,%rdi,8),%rax
16: 74 12 je 2a <decode_register+0x21>
18: 80 f9 03 cmp $0x3,%cl
1b: 76 0d jbe 2a <decode_register+0x21>
1d: 80 f9 07 cmp $0x7,%cl
20: 77 08 ja 2a <decode_register+0x21>
p = (unsigned char *)&regs[modrm_reg & 3] + 1;
22: 83 e7 03 and $0x3,%edi
25: 48 8d 44 fe 01 lea 0x1(%rsi,%rdi,8),%rax
return p;
}
2a: c3 retq
000000000000002b <read_descriptor>:
static int read_descriptor(struct x86_emulate_ctxt *ctxt,
struct x86_emulate_ops *ops,
void *ptr,
u16 *size, unsigned long *address, int op_bytes)
{
2b: 41 56 push %r14
int rc;
if (op_bytes == 2)
2d: 41 83 f9 02 cmp $0x2,%r9d
31: 49 89 f6 mov %rsi,%r14
34: b8 03 00 00 00 mov $0x3,%eax
39: 48 89 ce mov %rcx,%rsi
op_bytes = 3;
*address = 0;
3c: 49 c7 00 00 00 00 00 movq $0x0,(%r8)
43: 41 55 push %r13
rc = ops->read_std((unsigned long)ptr, (unsigned long *)size, 2, ctxt);
45: 48 89 f9 mov %rdi,%rcx
48: 49 89 fd mov %rdi,%r13
4b: 41 54 push %r12
4d: 4d 89 c4 mov %r8,%r12
50: 55 push %rbp
51: 48 89 d5 mov %rdx,%rbp
54: ba 02 00 00 00 mov $0x2,%edx
59: 48 89 ef mov %rbp,%rdi
5c: 53 push %rbx
5d: 44 89 cb mov %r9d,%ebx
60: 0f 44 d8 cmove %eax,%ebx
63: 41 ff 16 callq *(%r14)
if (rc)
66: 85 c0 test %eax,%eax
68: 75 1a jne 84 <read_descriptor+0x59>
return rc;
rc = ops->read_std((unsigned long)ptr + 2, address, op_bytes, ctxt);
6a: 89 da mov %ebx,%edx
6c: 48 8d 7d 02 lea 0x2(%rbp),%rdi
70: 4c 89 e6 mov %r12,%rsi
return rc;
}
73: 5b pop %rbx
74: 5d pop %rbp
75: 41 5c pop %r12
77: 4c 89 e9 mov %r13,%rcx
7a: 4d 8b 1e mov (%r14),%r11
7d: 41 5d pop %r13
7f: 41 5e pop %r14
81: 41 ff e3 jmpq *%r11
84: 5b pop %rbx
85: 5d pop %rbp
86: 41 5c pop %r12
88: 41 5d pop %r13
8a: 41 5e pop %r14
8c: c3 retq
000000000000008d <x86_emulate_memop>:
int
x86_emulate_memop(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops)
{
8d: 41 57 push %r15
unsigned d;
u8 b, sib, twobyte = 0, rex_prefix = 0;
u8 modrm, modrm_mod = 0, modrm_reg = 0, modrm_rm = 0;
unsigned long *override_base = NULL;
unsigned int op_bytes, ad_bytes, lock_prefix = 0, rep_prefix = 0, i;
int rc = 0;
struct operand src, dst;
unsigned long cr2 = ctxt->cr2;
int mode = ctxt->mode;
unsigned long modrm_ea;
int use_modrm_ea, index_reg = 0, base_reg = 0, scale, rip_relative = 0;
int no_wb = 0;
u64 msr_data;
/* Shadow copy of register state. Committed on successful emulation. */
unsigned long _regs[NR_VCPU_REGS];
unsigned long _eip = ctxt->vcpu->rip, _eflags = ctxt->eflags;
unsigned long modrm_val = 0;
memcpy(_regs, ctxt->vcpu->regs, sizeof _regs);
8f: ba 80 00 00 00 mov $0x80,%edx
94: 41 56 push %r14
96: 41 55 push %r13
98: 49 89 fd mov %rdi,%r13
9b: 41 54 push %r12
9d: 55 push %rbp
9e: 53 push %rbx
9f: 48 81 ec 68 01 00 00 sub $0x168,%rsp
a6: 48 89 34 24 mov %rsi,(%rsp)
aa: 8b 47 18 mov 0x18(%rdi),%eax
ad: 4c 8b 67 10 mov 0x10(%rdi),%r12
b1: 89 44 24 54 mov %eax,0x54(%rsp)
b5: 48 8b 07 mov (%rdi),%rax
b8: 48 8b 80 00 01 00 00 mov 0x100(%rax),%rax
bf: 48 89 84 24 50 01 00 mov %rax,0x150(%rsp)
c6: 00
c7: 48 8b 47 08 mov 0x8(%rdi),%rax
cb: 48 89 84 24 48 01 00 mov %rax,0x148(%rsp)
d2: 00
d3: 48 8b 37 mov (%rdi),%rsi
d6: 48 8d 7c 24 70 lea 0x70(%rsp),%rdi
db: 48 83 ee 80 sub $0xffffffffffffff80,%rsi
df: e8 00 00 00 00 callq e4 <x86_emulate_memop+0x57>
e0: R_X86_64_PC32 __memcpy+0xfffffffffffffffc
switch (mode) {
e4: 83 7c 24 54 02 cmpl $0x2,0x54(%rsp)
e9: 74 46 je 131 <x86_emulate_memop+0xa4>
eb: 7f 0c jg f9 <x86_emulate_memop+0x6c>
ed: 83 7c 24 54 00 cmpl $0x0,0x54(%rsp)
f2: 74 3d je 131 <x86_emulate_memop+0xa4>
f4: e9 58 3a 00 00 jmpq 3b51 <x86_emulate_memop+0x3ac4>
f9: 83 7c 24 54 04 cmpl $0x4,0x54(%rsp)
fe: 74 0d je 10d <x86_emulate_memop+0x80>
100: 83 7c 24 54 08 cmpl $0x8,0x54(%rsp)
105: 0f 85 46 3a 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
10b: eb 12 jmp 11f <x86_emulate_memop+0x92>
10d: c7 44 24 6c 04 00 00 movl $0x4,0x6c(%rsp)
114: 00
115: c7 44 24 48 04 00 00 movl $0x4,0x48(%rsp)
11c: 00
11d: eb 22 jmp 141 <x86_emulate_memop+0xb4>
11f: c7 44 24 6c 04 00 00 movl $0x4,0x6c(%rsp)
126: 00
127: c7 44 24 48 08 00 00 movl $0x8,0x48(%rsp)
12e: 00
12f: eb 10 jmp 141 <x86_emulate_memop+0xb4>
131: c7 44 24 6c 02 00 00 movl $0x2,0x6c(%rsp)
138: 00
139: c7 44 24 48 02 00 00 movl $0x2,0x48(%rsp)
140: 00
141: 48 c7 44 24 40 00 00 movq $0x0,0x40(%rsp)
148: 00 00
14a: c7 44 24 4c 00 00 00 movl $0x0,0x4c(%rsp)
151: 00
152: 31 db xor %ebx,%ebx
154: c7 44 24 50 00 00 00 movl $0x0,0x50(%rsp)
15b: 00
case X86EMUL_MODE_REAL:
case X86EMUL_MODE_PROT16:
op_bytes = ad_bytes = 2;
break;
case X86EMUL_MODE_PROT32:
op_bytes = ad_bytes = 4;
break;
#ifdef CONFIG_X86_64
case X86EMUL_MODE_PROT64:
op_bytes = 4;
ad_bytes = 8;
break;
#endif
default:
return -1;
}
/* Legacy prefixes. */
for (i = 0; i < 8; i++) {
switch (b = insn_fetch(u8, 1, _eip)) {
15c: 48 8b 2c 24 mov (%rsp),%rbp
160: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
167: 00
168: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
16f: 00
170: 49 03 7d 20 add 0x20(%r13),%rdi
174: 4c 89 e9 mov %r13,%rcx
177: ba 01 00 00 00 mov $0x1,%edx
17c: ff 55 00 callq *0x0(%rbp)
17f: 85 c0 test %eax,%eax
181: 41 89 c7 mov %eax,%r15d
184: 0f 85 1a 2a 00 00 jne 2ba4 <x86_emulate_memop+0x2b17>
18a: 40 8a ac 24 38 01 00 mov 0x138(%rsp),%bpl
191: 00
192: 48 ff 84 24 50 01 00 incq 0x150(%rsp)
199: 00
19a: 40 80 fd 65 cmp $0x65,%bpl
19e: 0f 84 9c 00 00 00 je 240 <x86_emulate_memop+0x1b3>
1a4: 77 30 ja 1d6 <x86_emulate_memop+0x149>
1a6: 40 80 fd 36 cmp $0x36,%bpl
1aa: 0f 84 9b 00 00 00 je 24b <x86_emulate_memop+0x1be>
1b0: 77 12 ja 1c4 <x86_emulate_memop+0x137>
1b2: 40 80 fd 26 cmp $0x26,%bpl
1b6: 74 72 je 22a <x86_emulate_memop+0x19d>
1b8: 40 80 fd 2e cmp $0x2e,%bpl
1bc: 0f 85 a7 00 00 00 jne 269 <x86_emulate_memop+0x1dc>
1c2: eb 5a jmp 21e <x86_emulate_memop+0x191>
1c4: 40 80 fd 3e cmp $0x3e,%bpl
1c8: 74 5a je 224 <x86_emulate_memop+0x197>
1ca: 40 80 fd 64 cmp $0x64,%bpl
1ce: 0f 85 95 00 00 00 jne 269 <x86_emulate_memop+0x1dc>
1d4: eb 5f jmp 235 <x86_emulate_memop+0x1a8>
1d6: 40 80 fd f0 cmp $0xf0,%bpl
1da: 74 7a je 256 <x86_emulate_memop+0x1c9>
1dc: 77 0e ja 1ec <x86_emulate_memop+0x15f>
1de: 40 80 fd 66 cmp $0x66,%bpl
1e2: 74 1e je 202 <x86_emulate_memop+0x175>
1e4: 40 80 fd 67 cmp $0x67,%bpl
1e8: 75 7f jne 269 <x86_emulate_memop+0x1dc>
1ea: eb 1d jmp 209 <x86_emulate_memop+0x17c>
1ec: 40 80 fd f2 cmp $0xf2,%bpl
1f0: 74 6c je 25e <x86_emulate_memop+0x1d1>
1f2: 40 80 fd f3 cmp $0xf3,%bpl
1f6: 75 71 jne 269 <x86_emulate_memop+0x1dc>
1f8: c7 44 24 50 01 00 00 movl $0x1,0x50(%rsp)
1ff: 00
200: eb 5c jmp 25e <x86_emulate_memop+0x1d1>
case 0x66: /* operand-size override */
op_bytes ^= 6; /* switch between 2/4 bytes */
202: 83 74 24 6c 06 xorl $0x6,0x6c(%rsp)
207: eb 55 jmp 25e <x86_emulate_memop+0x1d1>
break;
case 0x67: /* address-size override */
if (mode == X86EMUL_MODE_PROT64)
209: 83 7c 24 54 08 cmpl $0x8,0x54(%rsp)
20e: 75 07 jne 217 <x86_emulate_memop+0x18a>
ad_bytes ^= 12; /* switch between 4/8 bytes */
210: 83 74 24 48 0c xorl $0xc,0x48(%rsp)
215: eb 47 jmp 25e <x86_emulate_memop+0x1d1>
else
ad_bytes ^= 6; /* switch between 2/4 bytes */
217: 83 74 24 48 06 xorl $0x6,0x48(%rsp)
21c: eb 40 jmp 25e <x86_emulate_memop+0x1d1>
break;
case 0x2e: /* CS override */
override_base = &ctxt->cs_base;
21e: 4d 8d 45 20 lea 0x20(%r13),%r8
222: eb 20 jmp 244 <x86_emulate_memop+0x1b7>
break;
case 0x3e: /* DS override */
override_base = &ctxt->ds_base;
224: 49 8d 45 28 lea 0x28(%r13),%rax
228: eb 25 jmp 24f <x86_emulate_memop+0x1c2>
break;
case 0x26: /* ES override */
override_base = &ctxt->es_base;
22a: 49 8d 55 30 lea 0x30(%r13),%rdx
22e: 48 89 54 24 40 mov %rdx,0x40(%rsp)
233: eb 29 jmp 25e <x86_emulate_memop+0x1d1>
break;
case 0x64: /* FS override */
override_base = &ctxt->fs_base;
235: 49 8d 4d 48 lea 0x48(%r13),%rcx
239: 48 89 4c 24 40 mov %rcx,0x40(%rsp)
23e: eb 1e jmp 25e <x86_emulate_memop+0x1d1>
break;
case 0x65: /* GS override */
override_base = &ctxt->gs_base;
240: 4d 8d 45 40 lea 0x40(%r13),%r8
244: 4c 89 44 24 40 mov %r8,0x40(%rsp)
249: eb 13 jmp 25e <x86_emulate_memop+0x1d1>
break;
case 0x36: /* SS override */
override_base = &ctxt->ss_base;
24b: 49 8d 45 38 lea 0x38(%r13),%rax
24f: 48 89 44 24 40 mov %rax,0x40(%rsp)
254: eb 08 jmp 25e <x86_emulate_memop+0x1d1>
break;
256: c7 44 24 4c 01 00 00 movl $0x1,0x4c(%rsp)
25d: 00
25e: ff c3 inc %ebx
260: 83 fb 08 cmp $0x8,%ebx
263: 0f 85 f3 fe ff ff jne 15c <x86_emulate_memop+0xcf>
case 0xf0: /* LOCK */
lock_prefix = 1;
break;
case 0xf3: /* REP/REPE/REPZ */
rep_prefix = 1;
break;
case 0xf2: /* REPNE/REPNZ */
break;
default:
goto done_prefixes;
}
}
done_prefixes:
/* REX prefix. */
if ((mode == X86EMUL_MODE_PROT64) && ((b & 0xf0) == 0x40)) {
269: 83 7c 24 54 08 cmpl $0x8,0x54(%rsp)
26e: 0f 85 84 00 00 00 jne 2f8 <x86_emulate_memop+0x26b>
274: 40 0f b6 dd movzbl %bpl,%ebx
278: 89 d8 mov %ebx,%eax
27a: 25 f0 00 00 00 and $0xf0,%eax
27f: 83 f8 40 cmp $0x40,%eax
282: 75 74 jne 2f8 <x86_emulate_memop+0x26b>
rex_prefix = b;
if (b & 8)
284: f6 c3 08 test $0x8,%bl
287: 8b 54 24 54 mov 0x54(%rsp),%edx
28b: 0f 44 54 24 6c cmove 0x6c(%rsp),%edx
op_bytes = 8; /* REX.W */
modrm_reg = (b & 4) << 1; /* REX.R */
index_reg = (b & 2) << 2; /* REX.X */
modrm_rm = base_reg = (b & 1) << 3; /* REX.B */
b = insn_fetch(u8, 1, _eip);
290: 4c 8b 04 24 mov (%rsp),%r8
294: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
29b: 00
29c: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
2a3: 00
2a4: 49 03 7d 20 add 0x20(%r13),%rdi
2a8: 4c 89 e9 mov %r13,%rcx
2ab: 89 54 24 6c mov %edx,0x6c(%rsp)
2af: ba 01 00 00 00 mov $0x1,%edx
2b4: 41 ff 10 callq *(%r8)
2b7: 85 c0 test %eax,%eax
2b9: 0f 85 b1 07 00 00 jne a70 <x86_emulate_memop+0x9e3>
2bf: 40 88 e8 mov %bpl,%al
2c2: 48 ff 84 24 50 01 00 incq 0x150(%rsp)
2c9: 00
2ca: 40 88 6c 24 1e mov %bpl,0x1e(%rsp)
2cf: 83 e0 04 and $0x4,%eax
2d2: 40 8a ac 24 38 01 00 mov 0x138(%rsp),%bpl
2d9: 00
2da: 01 c0 add %eax,%eax
2dc: 88 44 24 20 mov %al,0x20(%rsp)
2e0: 89 d8 mov %ebx,%eax
2e2: 83 e3 01 and $0x1,%ebx
2e5: 83 e0 02 and $0x2,%eax
2e8: c1 e3 03 shl $0x3,%ebx
2eb: c1 e0 02 shl $0x2,%eax
2ee: 88 5c 24 3f mov %bl,0x3f(%rsp)
2f2: 89 44 24 58 mov %eax,0x58(%rsp)
2f6: eb 19 jmp 311 <x86_emulate_memop+0x284>
2f8: c6 44 24 1e 00 movb $0x0,0x1e(%rsp)
2fd: c6 44 24 20 00 movb $0x0,0x20(%rsp)
302: 31 db xor %ebx,%ebx
304: c6 44 24 3f 00 movb $0x0,0x3f(%rsp)
309: c7 44 24 58 00 00 00 movl $0x0,0x58(%rsp)
310: 00
}
/* Opcode byte(s). */
d = opcode_table[b];
311: 40 0f b6 c5 movzbl %bpl,%eax
315: 8a 80 00 00 00 00 mov 0x0(%rax),%al
317: R_X86_64_32S .rodata+0x140
if (d == 0) {
31b: 84 c0 test %al,%al
31d: 74 0e je 32d <x86_emulate_memop+0x2a0>
31f: 0f b6 c0 movzbl %al,%eax
322: c6 44 24 1d 00 movb $0x0,0x1d(%rsp)
327: 89 44 24 18 mov %eax,0x18(%rsp)
32b: eb 66 jmp 393 <x86_emulate_memop+0x306>
/* Two-byte opcode? */
if (b == 0x0f) {
32d: 40 80 fd 0f cmp $0xf,%bpl
331: 0f 85 1a 38 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
twobyte = 1;
b = insn_fetch(u8, 1, _eip);
337: 48 8b 2c 24 mov (%rsp),%rbp
33b: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
342: 00
343: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
34a: 00
34b: 49 03 7d 20 add 0x20(%r13),%rdi
34f: 4c 89 e9 mov %r13,%rcx
352: ba 01 00 00 00 mov $0x1,%edx
357: ff 55 00 callq *0x0(%rbp)
35a: 85 c0 test %eax,%eax
35c: 0f 85 0e 07 00 00 jne a70 <x86_emulate_memop+0x9e3>
362: 40 8a ac 24 38 01 00 mov 0x138(%rsp),%bpl
369: 00
36a: 48 ff 84 24 50 01 00 incq 0x150(%rsp)
371: 00
d = twobyte_table[b];
372: 40 0f b6 c5 movzbl %bpl,%eax
376: 66 8b 84 00 00 00 00 mov 0x0(%rax,%rax,1),%ax
37d: 00
37a: R_X86_64_32S .data
}
/* Unrecognised? */
if (d == 0)
37e: 66 85 c0 test %ax,%ax
381: 0f 84 ca 37 00 00 je 3b51 <x86_emulate_memop+0x3ac4>
387: 0f b7 c0 movzwl %ax,%eax
38a: c6 44 24 1d 01 movb $0x1,0x1d(%rsp)
38f: 89 44 24 18 mov %eax,0x18(%rsp)
goto cannot_emulate;
}
/* ModRM and SIB bytes. */
if (d & ModRM) {
393: 45 31 f6 xor %r14d,%r14d
396: f6 44 24 18 40 testb $0x40,0x18(%rsp)
39b: c6 44 24 1f 00 movb $0x0,0x1f(%rsp)
3a0: 0f 84 2a 04 00 00 je 7d0 <x86_emulate_memop+0x743>
modrm = insn_fetch(u8, 1, _eip);
3a6: 4c 8b 04 24 mov (%rsp),%r8
3aa: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
3b1: 00
3b2: 4c 89 e9 mov %r13,%rcx
3b5: 49 03 7d 20 add 0x20(%r13),%rdi
3b9: ba 01 00 00 00 mov $0x1,%edx
3be: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
3c5: 00
3c6: 41 ff 10 callq *(%r8)
3c9: 85 c0 test %eax,%eax
3cb: 0f 85 9f 06 00 00 jne a70 <x86_emulate_memop+0x9e3>
3d1: 8a 84 24 38 01 00 00 mov 0x138(%rsp),%al
3d8: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
3df: 00
modrm_mod |= (modrm & 0xc0) >> 6;
3e0: 0f b6 d0 movzbl %al,%edx
modrm_reg |= (modrm & 0x38) >> 3;
modrm_rm |= (modrm & 0x07);
3e3: 83 e0 07 and $0x7,%eax
3e6: 48 ff c7 inc %rdi
3e9: 41 88 c6 mov %al,%r14b
3ec: 44 0a 74 24 3f or 0x3f(%rsp),%r14b
3f1: 89 d1 mov %edx,%ecx
3f3: c1 e9 06 shr $0x6,%ecx
3f6: 83 e2 38 and $0x38,%edx
3f9: 48 89 bc 24 50 01 00 mov %rdi,0x150(%rsp)
400: 00
401: c1 fa 03 sar $0x3,%edx
404: 08 54 24 20 or %dl,0x20(%rsp)
modrm_ea = 0;
use_modrm_ea = 1;
if (modrm_mod == 3) {
408: 80 f9 03 cmp $0x3,%cl
40b: 88 4c 24 0f mov %cl,0xf(%rsp)
40f: 88 4c 24 1f mov %cl,0x1f(%rsp)
413: 44 88 74 24 3f mov %r14b,0x3f(%rsp)
418: 75 1d jne 437 <x86_emulate_memop+0x3aa>
modrm_val = *(unsigned long *)
41a: 8b 54 24 18 mov 0x18(%rsp),%edx
41e: 48 8d 74 24 70 lea 0x70(%rsp),%rsi
423: 41 0f b6 fe movzbl %r14b,%edi
427: 83 e2 01 and $0x1,%edx
42a: e8 da fb ff ff callq 9 <decode_register>
42f: 4c 8b 30 mov (%rax),%r14
432: e9 99 03 00 00 jmpq 7d0 <x86_emulate_memop+0x743>
decode_register(modrm_rm, _regs, d & ByteOp);
goto modrm_done;
}
if (ad_bytes == 2) {
437: 83 7c 24 48 02 cmpl $0x2,0x48(%rsp)
43c: 0f 85 3f 01 00 00 jne 581 <x86_emulate_memop+0x4f4>
unsigned bx = _regs[VCPU_REGS_RBX];
unsigned bp = _regs[VCPU_REGS_RBP];
unsigned si = _regs[VCPU_REGS_RSI];
unsigned di = _regs[VCPU_REGS_RDI];
/* 16-bit ModR/M decode. */
switch (modrm_mod) {
442: 80 7c 24 1f 01 cmpb $0x1,0x1f(%rsp)
447: 48 8b 84 24 98 00 00 mov 0x98(%rsp),%rax
44e: 00
44f: 48 8b 94 24 a8 00 00 mov 0xa8(%rsp),%rdx
456: 00
457: 48 8b 9c 24 88 00 00 mov 0x88(%rsp),%rbx
45e: 00
45f: 4c 8b a4 24 a0 00 00 mov 0xa0(%rsp),%r12
466: 00
467: 48 89 44 24 10 mov %rax,0x10(%rsp)
46c: 48 89 54 24 60 mov %rdx,0x60(%rsp)
471: 74 12 je 485 <x86_emulate_memop+0x3f8>
473: 72 07 jb 47c <x86_emulate_memop+0x3ef>
475: 80 7c 24 1f 02 cmpb $0x2,0x1f(%rsp)
47a: eb 05 jmp 481 <x86_emulate_memop+0x3f4>
case 0:
if (modrm_rm == 6)
47c: 80 7c 24 3f 06 cmpb $0x6,0x3f(%rsp)
481: 75 6e jne 4f1 <x86_emulate_memop+0x464>
483: eb 36 jmp 4bb <x86_emulate_memop+0x42e>
modrm_ea += insn_fetch(u16, 2, _eip);
break;
case 1:
modrm_ea += insn_fetch(s8, 1, _eip);
485: 4c 8b 04 24 mov (%rsp),%r8
489: 49 03 7d 20 add 0x20(%r13),%rdi
48d: 4c 89 e9 mov %r13,%rcx
490: ba 01 00 00 00 mov $0x1,%edx
495: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
49c: 00
49d: 41 ff 10 callq *(%r8)
4a0: 85 c0 test %eax,%eax
4a2: 0f 85 c8 05 00 00 jne a70 <x86_emulate_memop+0x9e3>
4a8: 48 ff 84 24 50 01 00 incq 0x150(%rsp)
4af: 00
4b0: 48 0f be 94 24 38 01 movsbq 0x138(%rsp),%rdx
4b7: 00 00
4b9: eb 38 jmp 4f3 <x86_emulate_memop+0x466>
break;
case 2:
modrm_ea += insn_fetch(u16, 2, _eip);
4bb: 4c 8b 04 24 mov (%rsp),%r8
4bf: 49 03 7d 20 add 0x20(%r13),%rdi
4c3: 4c 89 e9 mov %r13,%rcx
4c6: ba 02 00 00 00 mov $0x2,%edx
4cb: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
4d2: 00
4d3: 41 ff 10 callq *(%r8)
4d6: 85 c0 test %eax,%eax
4d8: 0f 85 92 05 00 00 jne a70 <x86_emulate_memop+0x9e3>
4de: 48 83 84 24 50 01 00 addq $0x2,0x150(%rsp)
4e5: 00 02
4e7: 0f b7 94 24 38 01 00 movzwl 0x138(%rsp),%edx
4ee: 00
4ef: eb 02 jmp 4f3 <x86_emulate_memop+0x466>
4f1: 31 d2 xor %edx,%edx
break;
}
switch (modrm_rm) {
4f3: 41 80 fe 07 cmp $0x7,%r14b
4f7: 89 df mov %ebx,%edi
4f9: 44 8b 44 24 10 mov 0x10(%rsp),%r8d
4fe: 44 89 e6 mov %r12d,%esi
501: 8b 4c 24 60 mov 0x60(%rsp),%ecx
505: 77 40 ja 547 <x86_emulate_memop+0x4ba>
507: 41 0f b6 c6 movzbl %r14b,%eax
50b: ff 24 c5 00 00 00 00 jmpq *0x0(,%rax,8)
50e: R_X86_64_32S .rodata
case 0:
modrm_ea += bx + si;
512: 8d 04 3e lea (%rsi,%rdi,1),%eax
515: eb 03 jmp 51a <x86_emulate_memop+0x48d>
break;
case 1:
modrm_ea += bx + di;
517: 8d 04 39 lea (%rcx,%rdi,1),%eax
51a: 48 01 c2 add %rax,%rdx
51d: eb 30 jmp 54f <x86_emulate_memop+0x4c2>
break;
case 2:
modrm_ea += bp + si;
51f: 42 8d 04 06 lea (%rsi,%r8,1),%eax
523: eb 04 jmp 529 <x86_emulate_memop+0x49c>
break;
case 3:
modrm_ea += bp + di;
525: 42 8d 04 01 lea (%rcx,%r8,1),%eax
529: 48 01 c2 add %rax,%rdx
52c: eb 2e jmp 55c <x86_emulate_memop+0x4cf>
break;
case 4:
modrm_ea += si;
52e: 44 89 e0 mov %r12d,%eax
531: eb e7 jmp 51a <x86_emulate_memop+0x48d>
break;
case 5:
modrm_ea += di;
533: 89 c8 mov %ecx,%eax
535: eb e3 jmp 51a <x86_emulate_memop+0x48d>
break;
case 6:
if (modrm_mod != 0)
537: 80 7c 24 0f 00 cmpb $0x0,0xf(%rsp)
modrm_ea += bp;
53c: 44 89 c0 mov %r8d,%eax
53f: 75 d9 jne 51a <x86_emulate_memop+0x48d>
541: eb 0c jmp 54f <x86_emulate_memop+0x4c2>
break;
case 7:
modrm_ea += bx;
543: 89 d8 mov %ebx,%eax
545: eb d3 jmp 51a <x86_emulate_memop+0x48d>
break;
}
if (modrm_rm == 2 || modrm_rm == 3 ||
547: 41 8d 46 fe lea 0xfffffffffffffffe(%r14),%eax
54b: 3c 01 cmp $0x1,%al
54d: 76 0d jbe 55c <x86_emulate_memop+0x4cf>
54f: 41 80 fe 06 cmp $0x6,%r14b
553: 75 1c jne 571 <x86_emulate_memop+0x4e4>
555: 80 7c 24 1f 00 cmpb $0x0,0x1f(%rsp)
55a: 74 15 je 571 <x86_emulate_memop+0x4e4>
(modrm_rm == 6 && modrm_mod != 0))
if (!override_base)
override_base = &ctxt->ss_base;
55c: 48 83 7c 24 40 00 cmpq $0x0,0x40(%rsp)
562: 49 8d 45 38 lea 0x38(%r13),%rax
566: 48 0f 45 44 24 40 cmovne 0x40(%rsp),%rax
56c: 48 89 44 24 40 mov %rax,0x40(%rsp)
modrm_ea = (u16)modrm_ea;
571: 0f b7 da movzwl %dx,%ebx
574: c7 44 24 5c 00 00 00 movl $0x0,0x5c(%rsp)
57b: 00
57c: e9 b0 01 00 00 jmpq 731 <x86_emulate_memop+0x6a4>
} else {
/* 32/64-bit ModR/M decode. */
switch (modrm_rm) {
581: 80 7c 24 3f 05 cmpb $0x5,0x3f(%rsp)
586: 0f b6 44 24 3f movzbl 0x3f(%rsp),%eax
58b: 0f 84 da 00 00 00 je 66b <x86_emulate_memop+0x5de>
591: 80 7c 24 3f 0c cmpb $0xc,0x3f(%rsp)
596: 74 0b je 5a3 <x86_emulate_memop+0x516>
598: 80 7c 24 3f 04 cmpb $0x4,0x3f(%rsp)
59d: 0f 85 ea 00 00 00 jne 68d <x86_emulate_memop+0x600>
case 4:
case 12:
sib = insn_fetch(u8, 1, _eip);
5a3: 4c 8b 04 24 mov (%rsp),%r8
5a7: 49 03 7d 20 add 0x20(%r13),%rdi
5ab: 4c 89 e9 mov %r13,%rcx
5ae: ba 01 00 00 00 mov $0x1,%edx
5b3: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
5ba: 00
5bb: 41 ff 10 callq *(%r8)
5be: 85 c0 test %eax,%eax
5c0: 0f 85 aa 04 00 00 jne a70 <x86_emulate_memop+0x9e3>
5c6: 44 8a a4 24 38 01 00 mov 0x138(%rsp),%r12b
5cd: 00
5ce: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
5d5: 00
index_reg |= (sib >> 3) & 7;
base_reg |= sib & 7;
5d6: 44 89 e0 mov %r12d,%eax
5d9: 48 ff c7 inc %rdi
5dc: 83 e0 07 and $0x7,%eax
5df: 48 89 bc 24 50 01 00 mov %rdi,0x150(%rsp)
5e6: 00
5e7: 09 d8 or %ebx,%eax
scale = sib >> 6;
switch (base_reg) {
5e9: 83 f8 05 cmp $0x5,%eax
5ec: 75 46 jne 634 <x86_emulate_memop+0x5a7>
case 5:
if (modrm_mod != 0)
5ee: 80 7c 24 1f 00 cmpb $0x0,0x1f(%rsp)
5f3: 74 0a je 5ff <x86_emulate_memop+0x572>
modrm_ea += _regs[base_reg];
5f5: 48 8b 9c 24 98 00 00 mov 0x98(%rsp),%rbx
5fc: 00
5fd: eb 3c jmp 63b <x86_emulate_memop+0x5ae>
else
modrm_ea += insn_fetch(s32, 4, _eip);
5ff: 48 8b 1c 24 mov (%rsp),%rbx
603: 49 03 7d 20 add 0x20(%r13),%rdi
607: 4c 89 e9 mov %r13,%rcx
60a: ba 04 00 00 00 mov $0x4,%edx
60f: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
616: 00
617: ff 13 callq *(%rbx)
619: 85 c0 test %eax,%eax
61b: 0f 85 4f 04 00 00 jne a70 <x86_emulate_memop+0x9e3>
621: 48 83 84 24 50 01 00 addq $0x4,0x150(%rsp)
628: 00 04
62a: 48 63 9c 24 38 01 00 movslq 0x138(%rsp),%rbx
631: 00
632: eb 07 jmp 63b <x86_emulate_memop+0x5ae>
break;
default:
modrm_ea += _regs[base_reg];
634: 48 98 cltq
636: 48 8b 5c c4 70 mov 0x70(%rsp,%rax,8),%rbx
63b: 44 88 e0 mov %r12b,%al
}
switch (index_reg) {
63e: c7 44 24 5c 00 00 00 movl $0x0,0x5c(%rsp)
645: 00
646: c0 e8 03 shr $0x3,%al
649: 83 e0 07 and $0x7,%eax
64c: 0b 44 24 58 or 0x58(%rsp),%eax
650: 83 f8 04 cmp $0x4,%eax
653: 74 47 je 69c <x86_emulate_memop+0x60f>
case 4:
break;
default:
modrm_ea += _regs[index_reg] << scale;
655: 48 98 cltq
657: 41 c0 ec 06 shr $0x6,%r12b
65b: 48 8b 44 c4 70 mov 0x70(%rsp,%rax,8),%rax
660: 44 88 e1 mov %r12b,%cl
663: 48 d3 e0 shl %cl,%rax
666: 48 01 c3 add %rax,%rbx
669: eb 31 jmp 69c <x86_emulate_memop+0x60f>
}
break;
case 5:
if (modrm_mod != 0)
66b: 80 7c 24 1f 00 cmpb $0x0,0x1f(%rsp)
670: 75 1b jne 68d <x86_emulate_memop+0x600>
modrm_ea += _regs[modrm_rm];
else if (mode == X86EMUL_MODE_PROT64)
672: 31 db xor %ebx,%ebx
674: 83 7c 24 54 08 cmpl $0x8,0x54(%rsp)
679: c7 44 24 5c 01 00 00 movl $0x1,0x5c(%rsp)
680: 00
681: 74 6f je 6f2 <x86_emulate_memop+0x665>
683: c7 44 24 5c 00 00 00 movl $0x0,0x5c(%rsp)
68a: 00
68b: eb 65 jmp 6f2 <x86_emulate_memop+0x665>
rip_relative = 1;
break;
default:
modrm_ea += _regs[modrm_rm];
68d: 48 98 cltq
68f: 48 8b 5c c4 70 mov 0x70(%rsp,%rax,8),%rbx
694: c7 44 24 5c 00 00 00 movl $0x0,0x5c(%rsp)
69b: 00
break;
}
switch (modrm_mod) {
69c: 80 7c 24 1f 01 cmpb $0x1,0x1f(%rsp)
6a1: 74 11 je 6b4 <x86_emulate_memop+0x627>
6a3: 72 07 jb 6ac <x86_emulate_memop+0x61f>
6a5: 80 7c 24 1f 02 cmpb $0x2,0x1f(%rsp)
6aa: eb 04 jmp 6b0 <x86_emulate_memop+0x623>
case 0:
if (modrm_rm == 5)
6ac: 41 80 fe 05 cmp $0x5,%r14b
6b0: 75 7f jne 731 <x86_emulate_memop+0x6a4>
6b2: eb 3e jmp 6f2 <x86_emulate_memop+0x665>
modrm_ea += insn_fetch(s32, 4, _eip);
break;
case 1:
modrm_ea += insn_fetch(s8, 1, _eip);
6b4: 4c 8b 04 24 mov (%rsp),%r8
6b8: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
6bf: 00
6c0: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
6c7: 00
6c8: 49 03 7d 20 add 0x20(%r13),%rdi
6cc: 4c 89 e9 mov %r13,%rcx
6cf: ba 01 00 00 00 mov $0x1,%edx
6d4: 41 ff 10 callq *(%r8)
6d7: 85 c0 test %eax,%eax
6d9: 0f 85 91 03 00 00 jne a70 <x86_emulate_memop+0x9e3>
6df: 48 ff 84 24 50 01 00 incq 0x150(%rsp)
6e6: 00
6e7: 48 0f be 84 24 38 01 movsbq 0x138(%rsp),%rax
6ee: 00 00
6f0: eb 3c jmp 72e <x86_emulate_memop+0x6a1>
break;
case 2:
modrm_ea += insn_fetch(s32, 4, _eip);
6f2: 4c 8b 04 24 mov (%rsp),%r8
6f6: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
6fd: 00
6fe: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
705: 00
706: 49 03 7d 20 add 0x20(%r13),%rdi
70a: 4c 89 e9 mov %r13,%rcx
70d: ba 04 00 00 00 mov $0x4,%edx
712: 41 ff 10 callq *(%r8)
715: 85 c0 test %eax,%eax
717: 0f 85 53 03 00 00 jne a70 <x86_emulate_memop+0x9e3>
71d: 48 83 84 24 50 01 00 addq $0x4,0x150(%rsp)
724: 00 04
726: 48 63 84 24 38 01 00 movslq 0x138(%rsp),%rax
72d: 00
72e: 48 01 c3 add %rax,%rbx
break;
}
}
if (!override_base)
override_base = &ctxt->ds_base;
731: 48 83 7c 24 40 00 cmpq $0x0,0x40(%rsp)
737: 49 8d 45 28 lea 0x28(%r13),%rax
73b: 48 0f 45 44 24 40 cmovne 0x40(%rsp),%rax
if (mode == X86EMUL_MODE_PROT64 &&
741: 83 7c 24 54 08 cmpl $0x8,0x54(%rsp)
746: 48 89 44 24 40 mov %rax,0x40(%rsp)
74b: 75 21 jne 76e <x86_emulate_memop+0x6e1>
74d: 49 8d 45 48 lea 0x48(%r13),%rax
751: 48 39 44 24 40 cmp %rax,0x40(%rsp)
756: 74 16 je 76e <x86_emulate_memop+0x6e1>
758: 49 8d 45 40 lea 0x40(%r13),%rax
75c: 48 39 44 24 40 cmp %rax,0x40(%rsp)
761: 74 0b je 76e <x86_emulate_memop+0x6e1>
763: 48 c7 44 24 40 00 00 movq $0x0,0x40(%rsp)
76a: 00 00
76c: eb 10 jmp 77e <x86_emulate_memop+0x6f1>
override_base != &ctxt->fs_base &&
override_base != &ctxt->gs_base)
override_base = NULL;
if (override_base)
76e: 48 83 7c 24 40 00 cmpq $0x0,0x40(%rsp)
774: 74 08 je 77e <x86_emulate_memop+0x6f1>
modrm_ea += *override_base;
776: 48 8b 44 24 40 mov 0x40(%rsp),%rax
77b: 48 03 18 add (%rax),%rbx
if (rip_relative) {
77e: 83 7c 24 5c 00 cmpl $0x0,0x5c(%rsp)
783: 74 3b je 7c0 <x86_emulate_memop+0x733>
modrm_ea += _eip;
switch (d & SrcMask) {
785: 8b 44 24 18 mov 0x18(%rsp),%eax
789: 48 03 9c 24 50 01 00 add 0x150(%rsp),%rbx
790: 00
791: 83 e0 38 and $0x38,%eax
794: 83 f8 28 cmp $0x28,%eax
797: 74 07 je 7a0 <x86_emulate_memop+0x713>
799: 83 f8 30 cmp $0x30,%eax
79c: 75 22 jne 7c0 <x86_emulate_memop+0x733>
79e: eb 07 jmp 7a7 <x86_emulate_memop+0x71a>
case SrcImmByte:
modrm_ea += 1;
break;
case SrcImm:
if (d & ByteOp)
7a0: f6 44 24 18 01 testb $0x1,0x18(%rsp)
7a5: 74 05 je 7ac <x86_emulate_memop+0x71f>
modrm_ea += 1;
7a7: 48 ff c3 inc %rbx
7aa: eb 14 jmp 7c0 <x86_emulate_memop+0x733>
else
if (op_bytes == 8)
7ac: 83 7c 24 6c 08 cmpl $0x8,0x6c(%rsp)
7b1: 75 06 jne 7b9 <x86_emulate_memop+0x72c>
modrm_ea += 4;
7b3: 48 83 c3 04 add $0x4,%rbx
7b7: eb 07 jmp 7c0 <x86_emulate_memop+0x733>
else
modrm_ea += op_bytes;
7b9: 8b 44 24 6c mov 0x6c(%rsp),%eax
7bd: 48 01 c3 add %rax,%rbx
}
}
if (ad_bytes != 8)
7c0: 45 31 f6 xor %r14d,%r14d
7c3: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
7c8: 49 89 dc mov %rbx,%r12
7cb: 74 03 je 7d0 <x86_emulate_memop+0x743>
modrm_ea = (u32)modrm_ea;
7cd: 41 89 dc mov %ebx,%r12d
cr2 = modrm_ea;
modrm_done:
;
}
/*
* Decode and fetch the source operand: register, memory
* or immediate.
*/
switch (d & SrcMask) {
7d0: 8b 44 24 18 mov 0x18(%rsp),%eax
7d4: 83 e0 38 and $0x38,%eax
7d7: 83 f8 18 cmp $0x18,%eax
7da: 0f 84 d6 00 00 00 je 8b6 <x86_emulate_memop+0x829>
7e0: 77 13 ja 7f5 <x86_emulate_memop+0x768>
7e2: 83 f8 08 cmp $0x8,%eax
7e5: 74 2e je 815 <x86_emulate_memop+0x788>
7e7: 83 f8 10 cmp $0x10,%eax
7ea: 0f 85 a1 02 00 00 jne a91 <x86_emulate_memop+0xa04>
7f0: e9 db 00 00 00 jmpq 8d0 <x86_emulate_memop+0x843>
7f5: 83 f8 28 cmp $0x28,%eax
7f8: 0f 84 34 01 00 00 je 932 <x86_emulate_memop+0x8a5>
7fe: 83 f8 30 cmp $0x30,%eax
801: 0f 84 24 02 00 00 je a2b <x86_emulate_memop+0x99e>
807: 83 f8 20 cmp $0x20,%eax
80a: 0f 85 81 02 00 00 jne a91 <x86_emulate_memop+0xa04>
810: e9 ae 00 00 00 jmpq 8c3 <x86_emulate_memop+0x836>
case SrcNone:
break;
case SrcReg:
src.type = OP_REG;
if (d & ByteOp) {
815: f6 44 24 18 01 testb $0x1,0x18(%rsp)
81a: c7 84 24 10 01 00 00 movl $0x0,0x110(%rsp)
821: 00 00 00 00
825: 48 8d 74 24 70 lea 0x70(%rsp),%rsi
82a: 74 3f je 86b <x86_emulate_memop+0x7de>
src.ptr = decode_register(modrm_reg, _regs,
82c: 31 d2 xor %edx,%edx
82e: 80 7c 24 1e 00 cmpb $0x0,0x1e(%rsp)
833: 0f b6 7c 24 20 movzbl 0x20(%rsp),%edi
838: 0f 94 c2 sete %dl
83b: e8 c9 f7 ff ff callq 9 <decode_register>
840: 48 89 84 24 28 01 00 mov %rax,0x128(%rsp)
847: 00
(rex_prefix == 0));
src.val = src.orig_val = *(u8 *) src.ptr;
848: 0f b6 00 movzbl (%rax),%eax
src.bytes = 1;
84b: c7 84 24 14 01 00 00 movl $0x1,0x114(%rsp)
852: 01 00 00 00
856: 48 89 84 24 20 01 00 mov %rax,0x120(%rsp)
85d: 00
85e: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
865: 00
866: e9 26 02 00 00 jmpq a91 <x86_emulate_memop+0xa04>
} else {
src.ptr = decode_register(modrm_reg, _regs, 0);
86b: 0f b6 7c 24 20 movzbl 0x20(%rsp),%edi
870: 31 d2 xor %edx,%edx
872: e8 92 f7 ff ff callq 9 <decode_register>
switch ((src.bytes = op_bytes)) {
877: 8b 54 24 6c mov 0x6c(%rsp),%edx
87b: 48 89 84 24 28 01 00 mov %rax,0x128(%rsp)
882: 00
883: 83 fa 04 cmp $0x4,%edx
886: 89 94 24 14 01 00 00 mov %edx,0x114(%rsp)
88d: 74 13 je 8a2 <x86_emulate_memop+0x815>
88f: 83 fa 08 cmp $0x8,%edx
892: 74 1d je 8b1 <x86_emulate_memop+0x824>
894: 83 fa 02 cmp $0x2,%edx
897: 0f 85 f4 01 00 00 jne a91 <x86_emulate_memop+0xa04>
case 2:
src.val = src.orig_val = *(u16 *) src.ptr;
89d: 0f b7 00 movzwl (%rax),%eax
8a0: eb 02 jmp 8a4 <x86_emulate_memop+0x817>
break;
case 4:
src.val = src.orig_val = *(u32 *) src.ptr;
8a2: 8b 00 mov (%rax),%eax
8a4: 48 89 84 24 20 01 00 mov %rax,0x120(%rsp)
8ab: 00
8ac: e9 d8 01 00 00 jmpq a89 <x86_emulate_memop+0x9fc>
break;
case 8:
src.val = src.orig_val = *(u64 *) src.ptr;
8b1: 48 8b 00 mov (%rax),%rax
8b4: eb ee jmp 8a4 <x86_emulate_memop+0x817>
break;
}
}
break;
case SrcMem16:
src.bytes = 2;
8b6: c7 84 24 14 01 00 00 movl $0x2,0x114(%rsp)
8bd: 02 00 00 00
8c1: eb 23 jmp 8e6 <x86_emulate_memop+0x859>
goto srcmem_common;
case SrcMem32:
src.bytes = 4;
8c3: c7 84 24 14 01 00 00 movl $0x4,0x114(%rsp)
8ca: 04 00 00 00
8ce: eb 16 jmp 8e6 <x86_emulate_memop+0x859>
goto srcmem_common;
case SrcMem:
src.bytes = (d & ByteOp) ? 1 : op_bytes;
8d0: f6 44 24 18 01 testb $0x1,0x18(%rsp)
8d5: b8 01 00 00 00 mov $0x1,%eax
8da: 0f 44 44 24 6c cmove 0x6c(%rsp),%eax
8df: 89 84 24 14 01 00 00 mov %eax,0x114(%rsp)
srcmem_common:
src.type = OP_MEM;
src.ptr = (unsigned long *)cr2;
if ((rc = ops->read_emulated((unsigned long)src.ptr,
8e6: 48 8b 1c 24 mov (%rsp),%rbx
8ea: c7 84 24 10 01 00 00 movl $0x1,0x110(%rsp)
8f1: 01 00 00 00
8f5: 48 8d b4 24 18 01 00 lea 0x118(%rsp),%rsi
8fc: 00
8fd: 4c 89 a4 24 28 01 00 mov %r12,0x128(%rsp)
904: 00
905: 8b 94 24 14 01 00 00 mov 0x114(%rsp),%edx
90c: 4c 89 e9 mov %r13,%rcx
90f: 4c 89 e7 mov %r12,%rdi
912: ff 53 10 callq *0x10(%rbx)
915: 85 c0 test %eax,%eax
917: 0f 85 53 01 00 00 jne a70 <x86_emulate_memop+0x9e3>
&src.val, src.bytes, ctxt)) != 0)
goto done;
src.orig_val = src.val;
91d: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
924: 00
925: 48 89 84 24 20 01 00 mov %rax,0x120(%rsp)
92c: 00
92d: e9 5f 01 00 00 jmpq a91 <x86_emulate_memop+0xa04>
break;
case SrcImm:
src.type = OP_IMM;
src.ptr = (unsigned long *)_eip;
src.bytes = (d & ByteOp) ? 1 : op_bytes;
932: f6 44 24 18 01 testb $0x1,0x18(%rsp)
937: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
93e: 00
93f: c7 84 24 10 01 00 00 movl $0x2,0x110(%rsp)
946: 02 00 00 00
94a: 48 89 bc 24 28 01 00 mov %rdi,0x128(%rsp)
951: 00
952: 0f 85 fe 31 00 00 jne 3b56 <x86_emulate_memop+0x3ac9>
958: 44 8b 44 24 6c mov 0x6c(%rsp),%r8d
if (src.bytes == 8)
95d: 41 83 f8 08 cmp $0x8,%r8d
961: 44 89 84 24 14 01 00 mov %r8d,0x114(%rsp)
968: 00
969: 75 0d jne 978 <x86_emulate_memop+0x8eb>
src.bytes = 4;
96b: c7 84 24 14 01 00 00 movl $0x4,0x114(%rsp)
972: 04 00 00 00
976: eb 7a jmp 9f2 <x86_emulate_memop+0x965>
/* NB. Immediates are sign-extended as necessary. */
switch (src.bytes) {
978: 83 7c 24 6c 02 cmpl $0x2,0x6c(%rsp)
97d: 74 39 je 9b8 <x86_emulate_memop+0x92b>
97f: 83 7c 24 6c 04 cmpl $0x4,0x6c(%rsp)
984: 74 6c je 9f2 <x86_emulate_memop+0x965>
986: 83 7c 24 6c 01 cmpl $0x1,0x6c(%rsp)
98b: 0f 85 00 01 00 00 jne a91 <x86_emulate_memop+0xa04>
case 1:
src.val = insn_fetch(s8, 1, _eip);
991: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
998: 00
999: 49 03 7d 20 add 0x20(%r13),%rdi
99d: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
9a4: 00
9a5: 48 8b 1c 24 mov (%rsp),%rbx
9a9: 4c 89 e9 mov %r13,%rcx
9ac: ba 01 00 00 00 mov $0x1,%edx
9b1: ff 13 callq *(%rbx)
9b3: e9 b4 00 00 00 jmpq a6c <x86_emulate_memop+0x9df>
break;
case 2:
src.val = insn_fetch(s16, 2, _eip);
9b8: 4c 8b 04 24 mov (%rsp),%r8
9bc: 49 03 7d 20 add 0x20(%r13),%rdi
9c0: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
9c7: 00
9c8: 4c 89 e9 mov %r13,%rcx
9cb: ba 02 00 00 00 mov $0x2,%edx
9d0: 41 ff 10 callq *(%r8)
9d3: 85 c0 test %eax,%eax
9d5: 0f 85 95 00 00 00 jne a70 <x86_emulate_memop+0x9e3>
9db: 48 83 84 24 50 01 00 addq $0x2,0x150(%rsp)
9e2: 00 02
9e4: 48 0f bf 84 24 38 01 movswq 0x138(%rsp),%rax
9eb: 00 00
9ed: e9 97 00 00 00 jmpq a89 <x86_emulate_memop+0x9fc>
break;
case 4:
src.val = insn_fetch(s32, 4, _eip);
9f2: 48 8b 1c 24 mov (%rsp),%rbx
9f6: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
9fd: 00
9fe: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
a05: 00
a06: 49 03 7d 20 add 0x20(%r13),%rdi
a0a: 4c 89 e9 mov %r13,%rcx
a0d: ba 04 00 00 00 mov $0x4,%edx
a12: ff 13 callq *(%rbx)
a14: 85 c0 test %eax,%eax
a16: 75 58 jne a70 <x86_emulate_memop+0x9e3>
a18: 48 83 84 24 50 01 00 addq $0x4,0x150(%rsp)
a1f: 00 04
a21: 48 63 84 24 38 01 00 movslq 0x138(%rsp),%rax
a28: 00
a29: eb 5e jmp a89 <x86_emulate_memop+0x9fc>
break;
}
break;
case SrcImmByte:
src.type = OP_IMM;
src.ptr = (unsigned long *)_eip;
a2b: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
a32: 00
src.bytes = 1;
src.val = insn_fetch(s8, 1, _eip);
a33: 4c 8b 04 24 mov (%rsp),%r8
a37: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
a3e: 00
a3f: c7 84 24 10 01 00 00 movl $0x2,0x110(%rsp)
a46: 02 00 00 00
a4a: c7 84 24 14 01 00 00 movl $0x1,0x114(%rsp)
a51: 01 00 00 00
a55: 4c 89 e9 mov %r13,%rcx
a58: ba 01 00 00 00 mov $0x1,%edx
a5d: 48 89 bc 24 28 01 00 mov %rdi,0x128(%rsp)
a64: 00
a65: 49 03 7d 20 add 0x20(%r13),%rdi
a69: 41 ff 10 callq *(%r8)
a6c: 85 c0 test %eax,%eax
a6e: 74 08 je a78 <x86_emulate_memop+0x9eb>
a70: 41 89 c7 mov %eax,%r15d
a73: e9 2c 21 00 00 jmpq 2ba4 <x86_emulate_memop+0x2b17>
a78: 48 ff 84 24 50 01 00 incq 0x150(%rsp)
a7f: 00
a80: 48 0f be 84 24 38 01 movsbq 0x138(%rsp),%rax
a87: 00 00
a89: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
a90: 00
break;
}
/* Decode and fetch the destination operand: register or memory. */
switch (d & DstMask) {
a91: 8b 44 24 18 mov 0x18(%rsp),%eax
a95: 83 e0 06 and $0x6,%eax
a98: 83 f8 04 cmp $0x4,%eax
a9b: 74 17 je ab4 <x86_emulate_memop+0xa27>
a9d: 83 f8 06 cmp $0x6,%eax
aa0: 0f 84 ba 00 00 00 je b60 <x86_emulate_memop+0xad3>
aa6: 83 f8 02 cmp $0x2,%eax
aa9: 0f 85 2f 01 00 00 jne bde <x86_emulate_memop+0xb51>
aaf: e9 00 21 00 00 jmpq 2bb4 <x86_emulate_memop+0x2b27>
case ImplicitOps:
/* Special instructions do their own operand decoding. */
goto special_insn;
case DstReg:
dst.type = OP_REG;
if ((d & ByteOp)
ab4: f6 44 24 18 01 testb $0x1,0x18(%rsp)
ab9: c7 84 24 f0 00 00 00 movl $0x0,0xf0(%rsp)
ac0: 00 00 00 00
ac4: 74 4a je b10 <x86_emulate_memop+0xa83>
ac6: 80 7c 24 1d 00 cmpb $0x0,0x1d(%rsp)
acb: 74 07 je ad4 <x86_emulate_memop+0xa47>
acd: 8d 45 4a lea 0x4a(%rbp),%eax
ad0: 3c 01 cmp $0x1,%al
ad2: 76 3c jbe b10 <x86_emulate_memop+0xa83>
&& !(twobyte && (b == 0xb6 || b == 0xb7))) {
dst.ptr = decode_register(modrm_reg, _regs,
ad4: 31 d2 xor %edx,%edx
ad6: 80 7c 24 1e 00 cmpb $0x0,0x1e(%rsp)
adb: 0f b6 7c 24 20 movzbl 0x20(%rsp),%edi
ae0: 48 8d 74 24 70 lea 0x70(%rsp),%rsi
ae5: 0f 94 c2 sete %dl
ae8: e8 1c f5 ff ff callq 9 <decode_register>
aed: 48 89 84 24 08 01 00 mov %rax,0x108(%rsp)
af4: 00
(rex_prefix == 0));
dst.val = *(u8 *) dst.ptr;
af5: 0f b6 00 movzbl (%rax),%eax
dst.bytes = 1;
af8: c7 84 24 f4 00 00 00 movl $0x1,0xf4(%rsp)
aff: 01 00 00 00
b03: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
b0a: 00
b0b: e9 ce 00 00 00 jmpq bde <x86_emulate_memop+0xb51>
} else {
dst.ptr = decode_register(modrm_reg, _regs, 0);
b10: 0f b6 7c 24 20 movzbl 0x20(%rsp),%edi
b15: 48 8d 74 24 70 lea 0x70(%rsp),%rsi
b1a: 31 d2 xor %edx,%edx
b1c: e8 e8 f4 ff ff callq 9 <decode_register>
switch ((dst.bytes = op_bytes)) {
b21: 8b 54 24 6c mov 0x6c(%rsp),%edx
b25: 48 89 84 24 08 01 00 mov %rax,0x108(%rsp)
b2c: 00
b2d: 83 fa 04 cmp $0x4,%edx
b30: 89 94 24 f4 00 00 00 mov %edx,0xf4(%rsp)
b37: 74 13 je b4c <x86_emulate_memop+0xabf>
b39: 83 fa 08 cmp $0x8,%edx
b3c: 74 1d je b5b <x86_emulate_memop+0xace>
b3e: 83 fa 02 cmp $0x2,%edx
b41: 0f 85 97 00 00 00 jne bde <x86_emulate_memop+0xb51>
case 2:
dst.val = *(u16 *)dst.ptr;
b47: 0f b7 00 movzwl (%rax),%eax
b4a: eb 02 jmp b4e <x86_emulate_memop+0xac1>
break;
case 4:
dst.val = *(u32 *)dst.ptr;
b4c: 8b 00 mov (%rax),%eax
b4e: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
b55: 00
b56: e9 83 00 00 00 jmpq bde <x86_emulate_memop+0xb51>
break;
case 8:
dst.val = *(u64 *)dst.ptr;
b5b: 48 8b 00 mov (%rax),%rax
b5e: eb ee jmp b4e <x86_emulate_memop+0xac1>
break;
}
}
break;
case DstMem:
dst.type = OP_MEM;
dst.ptr = (unsigned long *)cr2;
dst.bytes = (d & ByteOp) ? 1 : op_bytes;
b60: f6 44 24 18 01 testb $0x1,0x18(%rsp)
b65: b8 01 00 00 00 mov $0x1,%eax
b6a: c7 84 24 f0 00 00 00 movl $0x1,0xf0(%rsp)
b71: 01 00 00 00
b75: 0f 44 44 24 6c cmove 0x6c(%rsp),%eax
if (d & BitOp) {
b7a: f7 44 24 18 00 01 00 testl $0x100,0x18(%rsp)
b81: 00
b82: 4c 89 a4 24 08 01 00 mov %r12,0x108(%rsp)
b89: 00
b8a: 89 84 24 f4 00 00 00 mov %eax,0xf4(%rsp)
b91: 74 18 je bab <x86_emulate_memop+0xb1e>
unsigned long mask = ~(dst.bytes * 8 - 1);
dst.ptr = (void *)dst.ptr + (src.val & mask) / 8;
b93: c1 e0 03 shl $0x3,%eax
b96: f7 d8 neg %eax
b98: 23 84 24 18 01 00 00 and 0x118(%rsp),%eax
b9f: 48 c1 e8 03 shr $0x3,%rax
ba3: 48 01 84 24 08 01 00 add %rax,0x108(%rsp)
baa: 00
}
if (!(d & Mov) && /* optimisation - avoid slow emulated read */
bab: 80 7c 24 18 00 cmpb $0x0,0x18(%rsp)
bb0: 78 2c js bde <x86_emulate_memop+0xb51>
bb2: 48 8b 1c 24 mov (%rsp),%rbx
bb6: 8b 94 24 f4 00 00 00 mov 0xf4(%rsp),%edx
bbd: 48 8d b4 24 f8 00 00 lea 0xf8(%rsp),%rsi
bc4: 00
bc5: 48 8b bc 24 08 01 00 mov 0x108(%rsp),%rdi
bcc: 00
bcd: 4c 89 e9 mov %r13,%rcx
bd0: ff 53 10 callq *0x10(%rbx)
bd3: 85 c0 test %eax,%eax
bd5: 41 89 c7 mov %eax,%r15d
bd8: 0f 85 c6 1f 00 00 jne 2ba4 <x86_emulate_memop+0x2b17>
((rc = ops->read_emulated((unsigned long)dst.ptr,
&dst.val, dst.bytes, ctxt)) != 0))
goto done;
break;
}
dst.orig_val = dst.val;
if (twobyte)
bde: 80 7c 24 1d 00 cmpb $0x0,0x1d(%rsp)
be3: 48 8b 94 24 f8 00 00 mov 0xf8(%rsp),%rdx
bea: 00
beb: 48 89 94 24 00 01 00 mov %rdx,0x100(%rsp)
bf2: 00
bf3: 0f 85 2a 24 00 00 jne 3023 <x86_emulate_memop+0x2f96>
goto twobyte_insn;
switch (b) {
bf9: 40 80 fd 85 cmp $0x85,%bpl
bfd: 0f 87 b0 00 00 00 ja cb3 <x86_emulate_memop+0xc26>
c03: 40 80 fd 84 cmp $0x84,%bpl
c07: 0f 83 67 0c 00 00 jae 1874 <x86_emulate_memop+0x17e7>
c0d: 40 80 fd 25 cmp $0x25,%bpl
c11: 77 4d ja c60 <x86_emulate_memop+0xbd3>
c13: 40 80 fd 20 cmp $0x20,%bpl
c17: 0f 83 ac 06 00 00 jae 12c9 <x86_emulate_memop+0x123c>
c1d: 40 80 fd 0d cmp $0xd,%bpl
c21: 77 19 ja c3c <x86_emulate_memop+0xbaf>
c23: 40 80 fd 08 cmp $0x8,%bpl
c27: 0f 83 91 02 00 00 jae ebe <x86_emulate_memop+0xe31>
c2d: 40 80 fd 05 cmp $0x5,%bpl
c31: 0f 87 56 1e 00 00 ja 2a8d <x86_emulate_memop+0x2a00>
c37: e9 25 01 00 00 jmpq d61 <x86_emulate_memop+0xcd4>
c3c: 40 80 fd 10 cmp $0x10,%bpl
c40: 0f 82 47 1e 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
c46: 40 80 fd 15 cmp $0x15,%bpl
c4a: 0f 86 cb 03 00 00 jbe 101b <x86_emulate_memop+0xf8e>
c50: 8d 45 e8 lea 0xffffffffffffffe8(%rbp),%eax
c53: 3c 05 cmp $0x5,%al
c55: 0f 87 32 1e 00 00 ja 2a8d <x86_emulate_memop+0x2a00>
c5b: e9 18 05 00 00 jmpq 1178 <x86_emulate_memop+0x10eb>
c60: 40 80 fd 3d cmp $0x3d,%bpl
c64: 77 2e ja c94 <x86_emulate_memop+0xc07>
c66: 40 80 fd 38 cmp $0x38,%bpl
c6a: 0f 83 70 0a 00 00 jae 16e0 <x86_emulate_memop+0x1653>
c70: 40 80 fd 28 cmp $0x28,%bpl
c74: 0f 82 13 1e 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
c7a: 40 80 fd 2d cmp $0x2d,%bpl
c7e: 0f 86 a2 07 00 00 jbe 1426 <x86_emulate_memop+0x1399>
c84: 8d 45 d0 lea 0xffffffffffffffd0(%rbp),%eax
c87: 3c 05 cmp $0x5,%al
c89: 0f 87 fe 1d 00 00 ja 2a8d <x86_emulate_memop+0x2a00>
c8f: e9 ef 08 00 00 jmpq 1583 <x86_emulate_memop+0x14f6>
c94: 40 80 fd 63 cmp $0x63,%bpl
c98: 0f 84 9f 0b 00 00 je 183d <x86_emulate_memop+0x17b0>
c9e: 0f 82 e9 1d 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
ca4: 40 80 fd 80 cmp $0x80,%bpl
ca8: 0f 82 df 1d 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
cae: e9 aa 0b 00 00 jmpq 185d <x86_emulate_memop+0x17d0>
cb3: 40 80 fd c1 cmp $0xc1,%bpl
cb7: 77 52 ja d0b <x86_emulate_memop+0xc7e>
cb9: 40 80 fd c0 cmp $0xc0,%bpl
cbd: 0f 83 98 0e 00 00 jae 1b5b <x86_emulate_memop+0x1ace>
cc3: 40 80 fd 8f cmp $0x8f,%bpl
cc7: 0f 84 ce 0d 00 00 je 1a9b <x86_emulate_memop+0x1a0e>
ccd: 77 19 ja ce8 <x86_emulate_memop+0xc5b>
ccf: 40 80 fd 87 cmp $0x87,%bpl
cd3: 0f 86 ec 0c 00 00 jbe 19c5 <x86_emulate_memop+0x1938>
cd9: 40 80 fd 8b cmp $0x8b,%bpl
cdd: 0f 87 aa 1d 00 00 ja 2a8d <x86_emulate_memop+0x2a00>
ce3: e9 9e 0d 00 00 jmpq 1a86 <x86_emulate_memop+0x19f9>
ce8: 40 80 fd a0 cmp $0xa0,%bpl
cec: 0f 82 9b 1d 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
cf2: 40 80 fd a1 cmp $0xa1,%bpl
cf6: 0f 86 3e 0d 00 00 jbe 1a3a <x86_emulate_memop+0x19ad>
cfc: 40 80 fd a3 cmp $0xa3,%bpl
d00: 0f 87 87 1d 00 00 ja 2a8d <x86_emulate_memop+0x2a00>
d06: e9 5d 0d 00 00 jmpq 1a68 <x86_emulate_memop+0x19db>
d0b: 40 80 fd d3 cmp $0xd3,%bpl
d0f: 77 2d ja d3e <x86_emulate_memop+0xcb1>
d11: 40 80 fd d2 cmp $0xd2,%bpl
d15: 0f 83 76 17 00 00 jae 2491 <x86_emulate_memop+0x2404>
d1b: 40 80 fd c6 cmp $0xc6,%bpl
d1f: 0f 82 68 1d 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
d25: 40 80 fd c7 cmp $0xc7,%bpl
d29: 0f 86 57 0d 00 00 jbe 1a86 <x86_emulate_memop+0x19f9>
d2f: 40 80 fd d0 cmp $0xd0,%bpl
d33: 0f 82 54 1d 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
d39: e9 42 17 00 00 jmpq 2480 <x86_emulate_memop+0x23f3>
d3e: 40 80 fd f6 cmp $0xf6,%bpl
d42: 0f 82 45 1d 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
d48: 40 80 fd f7 cmp $0xf7,%bpl
d4c: 0f 86 51 17 00 00 jbe 24a3 <x86_emulate_memop+0x2416>
d52: 40 80 fd fe cmp $0xfe,%bpl
d56: 0f 82 31 1d 00 00 jb 2a8d <x86_emulate_memop+0x2a00>
d5c: e9 ce 19 00 00 jmpq 272f <x86_emulate_memop+0x26a2>
case 0x00 ... 0x05:
add: /* add */
emulate_2op_SrcV("add", src, dst, _eflags);
d61: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
d68: 83 f8 01 cmp $0x1,%eax
d6b: 75 4b jne db8 <x86_emulate_memop+0xd2b>
d6d: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
d74: 00
d75: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
d7c: bd d5 08 00 00 mov $0x8d5,%ebp
d81: 21 2c 24 and %ebp,(%rsp)
d84: 9c pushfq
d85: f7 d5 not %ebp
d87: 21 2c 24 and %ebp,(%rsp)
d8a: 5d pop %rbp
d8b: 09 2c 24 or %ebp,(%rsp)
d8e: 9d popfq
d8f: bd d5 08 00 00 mov $0x8d5,%ebp
d94: f7 d5 not %ebp
d96: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
d9d: 00 84 24 f8 00 00 00 add %al,0xf8(%rsp)
da4: 9c pushfq
da5: 5d pop %rbp
da6: 81 e5 d5 08 00 00 and $0x8d5,%ebp
dac: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
db3: e9 d5 1c 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
db8: 83 f8 04 cmp $0x4,%eax
dbb: 74 6a je e27 <x86_emulate_memop+0xd9a>
dbd: 83 f8 08 cmp $0x8,%eax
dc0: 0f 84 ac 00 00 00 je e72 <x86_emulate_memop+0xde5>
dc6: 83 f8 02 cmp $0x2,%eax
dc9: 0f 85 be 1c 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
dcf: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
dd6: 00
dd7: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
dde: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
de4: 44 21 04 24 and %r8d,(%rsp)
de8: 9c pushfq
de9: 41 f7 d0 not %r8d
dec: 44 21 04 24 and %r8d,(%rsp)
df0: 41 58 pop %r8
df2: 44 09 04 24 or %r8d,(%rsp)
df6: 9d popfq
df7: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
dfd: 41 f7 d0 not %r8d
e00: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
e07: 00
e08: 66 01 84 24 f8 00 00 add %ax,0xf8(%rsp)
e0f: 00
e10: 9c pushfq
e11: 41 58 pop %r8
e13: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
e1a: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
e21: 00
e22: e9 66 1c 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
e27: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
e2e: 00
e2f: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
e36: ba d5 08 00 00 mov $0x8d5,%edx
e3b: 21 14 24 and %edx,(%rsp)
e3e: 9c pushfq
e3f: f7 d2 not %edx
e41: 21 14 24 and %edx,(%rsp)
e44: 5a pop %rdx
e45: 09 14 24 or %edx,(%rsp)
e48: 9d popfq
e49: ba d5 08 00 00 mov $0x8d5,%edx
e4e: f7 d2 not %edx
e50: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
e57: 01 84 24 f8 00 00 00 add %eax,0xf8(%rsp)
e5e: 9c pushfq
e5f: 5a pop %rdx
e60: 81 e2 d5 08 00 00 and $0x8d5,%edx
e66: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
e6d: e9 1b 1c 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
e72: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
e79: 00
e7a: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
e81: b9 d5 08 00 00 mov $0x8d5,%ecx
e86: 21 0c 24 and %ecx,(%rsp)
e89: 9c pushfq
e8a: f7 d1 not %ecx
e8c: 21 0c 24 and %ecx,(%rsp)
e8f: 59 pop %rcx
e90: 09 0c 24 or %ecx,(%rsp)
e93: 9d popfq
e94: b9 d5 08 00 00 mov $0x8d5,%ecx
e99: f7 d1 not %ecx
e9b: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
ea2: 48 01 84 24 f8 00 00 add %rax,0xf8(%rsp)
ea9: 00
eaa: 9c pushfq
eab: 59 pop %rcx
eac: 81 e1 d5 08 00 00 and $0x8d5,%ecx
eb2: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
eb9: e9 cf 1b 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x08 ... 0x0d:
or: /* or */
emulate_2op_SrcV("or", src, dst, _eflags);
ebe: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
ec5: 83 f8 01 cmp $0x1,%eax
ec8: 75 4b jne f15 <x86_emulate_memop+0xe88>
eca: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
ed1: 00
ed2: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
ed9: bb d5 08 00 00 mov $0x8d5,%ebx
ede: 21 1c 24 and %ebx,(%rsp)
ee1: 9c pushfq
ee2: f7 d3 not %ebx
ee4: 21 1c 24 and %ebx,(%rsp)
ee7: 5b pop %rbx
ee8: 09 1c 24 or %ebx,(%rsp)
eeb: 9d popfq
eec: bb d5 08 00 00 mov $0x8d5,%ebx
ef1: f7 d3 not %ebx
ef3: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
efa: 08 84 24 f8 00 00 00 or %al,0xf8(%rsp)
f01: 9c pushfq
f02: 5b pop %rbx
f03: 81 e3 d5 08 00 00 and $0x8d5,%ebx
f09: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
f10: e9 78 1b 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
f15: 83 f8 04 cmp $0x4,%eax
f18: 74 5e je f78 <x86_emulate_memop+0xeeb>
f1a: 83 f8 08 cmp $0x8,%eax
f1d: 0f 84 ac 00 00 00 je fcf <x86_emulate_memop+0xf42>
f23: 83 f8 02 cmp $0x2,%eax
f26: 0f 85 61 1b 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
f2c: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
f33: 00
f34: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
f3b: bd d5 08 00 00 mov $0x8d5,%ebp
f40: 21 2c 24 and %ebp,(%rsp)
f43: 9c pushfq
f44: f7 d5 not %ebp
f46: 21 2c 24 and %ebp,(%rsp)
f49: 5d pop %rbp
f4a: 09 2c 24 or %ebp,(%rsp)
f4d: 9d popfq
f4e: bd d5 08 00 00 mov $0x8d5,%ebp
f53: f7 d5 not %ebp
f55: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
f5c: 66 09 84 24 f8 00 00 or %ax,0xf8(%rsp)
f63: 00
f64: 9c pushfq
f65: 5d pop %rbp
f66: 81 e5 d5 08 00 00 and $0x8d5,%ebp
f6c: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
f73: e9 15 1b 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
f78: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
f7f: 00
f80: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
f87: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
f8d: 44 21 04 24 and %r8d,(%rsp)
f91: 9c pushfq
f92: 41 f7 d0 not %r8d
f95: 44 21 04 24 and %r8d,(%rsp)
f99: 41 58 pop %r8
f9b: 44 09 04 24 or %r8d,(%rsp)
f9f: 9d popfq
fa0: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
fa6: 41 f7 d0 not %r8d
fa9: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
fb0: 00
fb1: 09 84 24 f8 00 00 00 or %eax,0xf8(%rsp)
fb8: 9c pushfq
fb9: 41 58 pop %r8
fbb: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
fc2: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
fc9: 00
fca: e9 be 1a 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
fcf: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
fd6: 00
fd7: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
fde: ba d5 08 00 00 mov $0x8d5,%edx
fe3: 21 14 24 and %edx,(%rsp)
fe6: 9c pushfq
fe7: f7 d2 not %edx
fe9: 21 14 24 and %edx,(%rsp)
fec: 5a pop %rdx
fed: 09 14 24 or %edx,(%rsp)
ff0: 9d popfq
ff1: ba d5 08 00 00 mov $0x8d5,%edx
ff6: f7 d2 not %edx
ff8: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
fff: 48 09 84 24 f8 00 00 or %rax,0xf8(%rsp)
1006: 00
1007: 9c pushfq
1008: 5a pop %rdx
1009: 81 e2 d5 08 00 00 and $0x8d5,%edx
100f: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
1016: e9 72 1a 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x10 ... 0x15:
adc: /* adc */
emulate_2op_SrcV("adc", src, dst, _eflags);
101b: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
1022: 83 f8 01 cmp $0x1,%eax
1025: 75 4b jne 1072 <x86_emulate_memop+0xfe5>
1027: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
102e: 00
102f: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1036: b9 d5 08 00 00 mov $0x8d5,%ecx
103b: 21 0c 24 and %ecx,(%rsp)
103e: 9c pushfq
103f: f7 d1 not %ecx
1041: 21 0c 24 and %ecx,(%rsp)
1044: 59 pop %rcx
1045: 09 0c 24 or %ecx,(%rsp)
1048: 9d popfq
1049: b9 d5 08 00 00 mov $0x8d5,%ecx
104e: f7 d1 not %ecx
1050: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
1057: 10 84 24 f8 00 00 00 adc %al,0xf8(%rsp)
105e: 9c pushfq
105f: 59 pop %rcx
1060: 81 e1 d5 08 00 00 and $0x8d5,%ecx
1066: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
106d: e9 1b 1a 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1072: 83 f8 04 cmp $0x4,%eax
1075: 74 5e je 10d5 <x86_emulate_memop+0x1048>
1077: 83 f8 08 cmp $0x8,%eax
107a: 0f 84 a0 00 00 00 je 1120 <x86_emulate_memop+0x1093>
1080: 83 f8 02 cmp $0x2,%eax
1083: 0f 85 04 1a 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
1089: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1090: 00
1091: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1098: bb d5 08 00 00 mov $0x8d5,%ebx
109d: 21 1c 24 and %ebx,(%rsp)
10a0: 9c pushfq
10a1: f7 d3 not %ebx
10a3: 21 1c 24 and %ebx,(%rsp)
10a6: 5b pop %rbx
10a7: 09 1c 24 or %ebx,(%rsp)
10aa: 9d popfq
10ab: bb d5 08 00 00 mov $0x8d5,%ebx
10b0: f7 d3 not %ebx
10b2: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
10b9: 66 11 84 24 f8 00 00 adc %ax,0xf8(%rsp)
10c0: 00
10c1: 9c pushfq
10c2: 5b pop %rbx
10c3: 81 e3 d5 08 00 00 and $0x8d5,%ebx
10c9: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
10d0: e9 b8 19 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
10d5: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
10dc: 00
10dd: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
10e4: bd d5 08 00 00 mov $0x8d5,%ebp
10e9: 21 2c 24 and %ebp,(%rsp)
10ec: 9c pushfq
10ed: f7 d5 not %ebp
10ef: 21 2c 24 and %ebp,(%rsp)
10f2: 5d pop %rbp
10f3: 09 2c 24 or %ebp,(%rsp)
10f6: 9d popfq
10f7: bd d5 08 00 00 mov $0x8d5,%ebp
10fc: f7 d5 not %ebp
10fe: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
1105: 11 84 24 f8 00 00 00 adc %eax,0xf8(%rsp)
110c: 9c pushfq
110d: 5d pop %rbp
110e: 81 e5 d5 08 00 00 and $0x8d5,%ebp
1114: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
111b: e9 6d 19 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1120: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1127: 00
1128: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
112f: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
1135: 44 21 04 24 and %r8d,(%rsp)
1139: 9c pushfq
113a: 41 f7 d0 not %r8d
113d: 44 21 04 24 and %r8d,(%rsp)
1141: 41 58 pop %r8
1143: 44 09 04 24 or %r8d,(%rsp)
1147: 9d popfq
1148: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
114e: 41 f7 d0 not %r8d
1151: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
1158: 00
1159: 48 11 84 24 f8 00 00 adc %rax,0xf8(%rsp)
1160: 00
1161: 9c pushfq
1162: 41 58 pop %r8
1164: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
116b: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
1172: 00
1173: e9 15 19 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x18 ... 0x1d:
sbb: /* sbb */
emulate_2op_SrcV("sbb", src, dst, _eflags);
1178: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
117f: 83 f8 01 cmp $0x1,%eax
1182: 75 4b jne 11cf <x86_emulate_memop+0x1142>
1184: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
118b: 00
118c: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1193: ba d5 08 00 00 mov $0x8d5,%edx
1198: 21 14 24 and %edx,(%rsp)
119b: 9c pushfq
119c: f7 d2 not %edx
119e: 21 14 24 and %edx,(%rsp)
11a1: 5a pop %rdx
11a2: 09 14 24 or %edx,(%rsp)
11a5: 9d popfq
11a6: ba d5 08 00 00 mov $0x8d5,%edx
11ab: f7 d2 not %edx
11ad: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
11b4: 18 84 24 f8 00 00 00 sbb %al,0xf8(%rsp)
11bb: 9c pushfq
11bc: 5a pop %rdx
11bd: 81 e2 d5 08 00 00 and $0x8d5,%edx
11c3: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
11ca: e9 be 18 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
11cf: 83 f8 04 cmp $0x4,%eax
11d2: 74 5e je 1232 <x86_emulate_memop+0x11a5>
11d4: 83 f8 08 cmp $0x8,%eax
11d7: 0f 84 a0 00 00 00 je 127d <x86_emulate_memop+0x11f0>
11dd: 83 f8 02 cmp $0x2,%eax
11e0: 0f 85 a7 18 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
11e6: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
11ed: 00
11ee: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
11f5: b9 d5 08 00 00 mov $0x8d5,%ecx
11fa: 21 0c 24 and %ecx,(%rsp)
11fd: 9c pushfq
11fe: f7 d1 not %ecx
1200: 21 0c 24 and %ecx,(%rsp)
1203: 59 pop %rcx
1204: 09 0c 24 or %ecx,(%rsp)
1207: 9d popfq
1208: b9 d5 08 00 00 mov $0x8d5,%ecx
120d: f7 d1 not %ecx
120f: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
1216: 66 19 84 24 f8 00 00 sbb %ax,0xf8(%rsp)
121d: 00
121e: 9c pushfq
121f: 59 pop %rcx
1220: 81 e1 d5 08 00 00 and $0x8d5,%ecx
1226: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
122d: e9 5b 18 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1232: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1239: 00
123a: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1241: bb d5 08 00 00 mov $0x8d5,%ebx
1246: 21 1c 24 and %ebx,(%rsp)
1249: 9c pushfq
124a: f7 d3 not %ebx
124c: 21 1c 24 and %ebx,(%rsp)
124f: 5b pop %rbx
1250: 09 1c 24 or %ebx,(%rsp)
1253: 9d popfq
1254: bb d5 08 00 00 mov $0x8d5,%ebx
1259: f7 d3 not %ebx
125b: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
1262: 19 84 24 f8 00 00 00 sbb %eax,0xf8(%rsp)
1269: 9c pushfq
126a: 5b pop %rbx
126b: 81 e3 d5 08 00 00 and $0x8d5,%ebx
1271: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
1278: e9 10 18 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
127d: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1284: 00
1285: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
128c: bd d5 08 00 00 mov $0x8d5,%ebp
1291: 21 2c 24 and %ebp,(%rsp)
1294: 9c pushfq
1295: f7 d5 not %ebp
1297: 21 2c 24 and %ebp,(%rsp)
129a: 5d pop %rbp
129b: 09 2c 24 or %ebp,(%rsp)
129e: 9d popfq
129f: bd d5 08 00 00 mov $0x8d5,%ebp
12a4: f7 d5 not %ebp
12a6: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
12ad: 48 19 84 24 f8 00 00 sbb %rax,0xf8(%rsp)
12b4: 00
12b5: 9c pushfq
12b6: 5d pop %rbp
12b7: 81 e5 d5 08 00 00 and $0x8d5,%ebp
12bd: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
12c4: e9 c4 17 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x20 ... 0x25:
and: /* and */
emulate_2op_SrcV("and", src, dst, _eflags);
12c9: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
12d0: 83 f8 01 cmp $0x1,%eax
12d3: 75 57 jne 132c <x86_emulate_memop+0x129f>
12d5: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
12dc: 00
12dd: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
12e4: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
12ea: 44 21 04 24 and %r8d,(%rsp)
12ee: 9c pushfq
12ef: 41 f7 d0 not %r8d
12f2: 44 21 04 24 and %r8d,(%rsp)
12f6: 41 58 pop %r8
12f8: 44 09 04 24 or %r8d,(%rsp)
12fc: 9d popfq
12fd: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
1303: 41 f7 d0 not %r8d
1306: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
130d: 00
130e: 20 84 24 f8 00 00 00 and %al,0xf8(%rsp)
1315: 9c pushfq
1316: 41 58 pop %r8
1318: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
131f: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
1326: 00
1327: e9 61 17 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
132c: 83 f8 04 cmp $0x4,%eax
132f: 74 5e je 138f <x86_emulate_memop+0x1302>
1331: 83 f8 08 cmp $0x8,%eax
1334: 0f 84 a0 00 00 00 je 13da <x86_emulate_memop+0x134d>
133a: 83 f8 02 cmp $0x2,%eax
133d: 0f 85 4a 17 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
1343: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
134a: 00
134b: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1352: ba d5 08 00 00 mov $0x8d5,%edx
1357: 21 14 24 and %edx,(%rsp)
135a: 9c pushfq
135b: f7 d2 not %edx
135d: 21 14 24 and %edx,(%rsp)
1360: 5a pop %rdx
1361: 09 14 24 or %edx,(%rsp)
1364: 9d popfq
1365: ba d5 08 00 00 mov $0x8d5,%edx
136a: f7 d2 not %edx
136c: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
1373: 66 21 84 24 f8 00 00 and %ax,0xf8(%rsp)
137a: 00
137b: 9c pushfq
137c: 5a pop %rdx
137d: 81 e2 d5 08 00 00 and $0x8d5,%edx
1383: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
138a: e9 fe 16 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
138f: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1396: 00
1397: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
139e: b9 d5 08 00 00 mov $0x8d5,%ecx
13a3: 21 0c 24 and %ecx,(%rsp)
13a6: 9c pushfq
13a7: f7 d1 not %ecx
13a9: 21 0c 24 and %ecx,(%rsp)
13ac: 59 pop %rcx
13ad: 09 0c 24 or %ecx,(%rsp)
13b0: 9d popfq
13b1: b9 d5 08 00 00 mov $0x8d5,%ecx
13b6: f7 d1 not %ecx
13b8: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
13bf: 21 84 24 f8 00 00 00 and %eax,0xf8(%rsp)
13c6: 9c pushfq
13c7: 59 pop %rcx
13c8: 81 e1 d5 08 00 00 and $0x8d5,%ecx
13ce: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
13d5: e9 b3 16 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
13da: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
13e1: 00
13e2: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
13e9: bb d5 08 00 00 mov $0x8d5,%ebx
13ee: 21 1c 24 and %ebx,(%rsp)
13f1: 9c pushfq
13f2: f7 d3 not %ebx
13f4: 21 1c 24 and %ebx,(%rsp)
13f7: 5b pop %rbx
13f8: 09 1c 24 or %ebx,(%rsp)
13fb: 9d popfq
13fc: bb d5 08 00 00 mov $0x8d5,%ebx
1401: f7 d3 not %ebx
1403: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
140a: 48 21 84 24 f8 00 00 and %rax,0xf8(%rsp)
1411: 00
1412: 9c pushfq
1413: 5b pop %rbx
1414: 81 e3 d5 08 00 00 and $0x8d5,%ebx
141a: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
1421: e9 67 16 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
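A note on the pattern that repeats around every ALU case above and below: the `pushq 0x148(%rsp)` / `and $0x8d5` / `pushfq` / `popfq` choreography is the expansion of the `emulate_2op_SrcV` macro. It loads the guest's saved EFLAGS image into the real flags, executes the actual instruction, then merges only the arithmetic status flags back. The magic constant `0x8d5` is the mask of the six status flags an ALU op may define. A minimal C model of that merge (the `EFLG_*` names and `merge_eflags` helper are illustrative, not KVM's identifiers):

```c
#include <assert.h>
#include <stdint.h>

/* Arithmetic status flags at their architectural EFLAGS bit positions. */
#define EFLG_CF 0x001u
#define EFLG_PF 0x004u
#define EFLG_AF 0x010u
#define EFLG_ZF 0x040u
#define EFLG_SF 0x080u
#define EFLG_OF 0x800u

/* The mask the emulator applies on both sides of every 2-op ALU insn. */
#define EFLAGS_MASK (EFLG_CF | EFLG_PF | EFLG_AF | EFLG_ZF | EFLG_SF | EFLG_OF)

/* Merge the flags produced by the executed instruction into the guest's
 * saved EFLAGS image: take only the six arithmetic flags from the
 * hardware result, preserve everything else from the guest image. */
static uint64_t merge_eflags(uint64_t guest_eflags, uint64_t hw_eflags)
{
    return (guest_eflags & ~(uint64_t)EFLAGS_MASK) | (hw_eflags & EFLAGS_MASK);
}
```

`EFLAGS_MASK` works out to exactly the `$0x8d5` immediate seen in every one of these sequences.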
case 0x28 ... 0x2d:
sub: /* sub */
emulate_2op_SrcV("sub", src, dst, _eflags);
1426: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
142d: 83 f8 01 cmp $0x1,%eax
1430: 75 4b jne 147d <x86_emulate_memop+0x13f0>
1432: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1439: 00
143a: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1441: bd d5 08 00 00 mov $0x8d5,%ebp
1446: 21 2c 24 and %ebp,(%rsp)
1449: 9c pushfq
144a: f7 d5 not %ebp
144c: 21 2c 24 and %ebp,(%rsp)
144f: 5d pop %rbp
1450: 09 2c 24 or %ebp,(%rsp)
1453: 9d popfq
1454: bd d5 08 00 00 mov $0x8d5,%ebp
1459: f7 d5 not %ebp
145b: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
1462: 28 84 24 f8 00 00 00 sub %al,0xf8(%rsp)
1469: 9c pushfq
146a: 5d pop %rbp
146b: 81 e5 d5 08 00 00 and $0x8d5,%ebp
1471: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
1478: e9 10 16 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
147d: 83 f8 04 cmp $0x4,%eax
1480: 74 6a je 14ec <x86_emulate_memop+0x145f>
1482: 83 f8 08 cmp $0x8,%eax
1485: 0f 84 ac 00 00 00 je 1537 <x86_emulate_memop+0x14aa>
148b: 83 f8 02 cmp $0x2,%eax
148e: 0f 85 f9 15 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
1494: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
149b: 00
149c: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
14a3: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
14a9: 44 21 04 24 and %r8d,(%rsp)
14ad: 9c pushfq
14ae: 41 f7 d0 not %r8d
14b1: 44 21 04 24 and %r8d,(%rsp)
14b5: 41 58 pop %r8
14b7: 44 09 04 24 or %r8d,(%rsp)
14bb: 9d popfq
14bc: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
14c2: 41 f7 d0 not %r8d
14c5: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
14cc: 00
14cd: 66 29 84 24 f8 00 00 sub %ax,0xf8(%rsp)
14d4: 00
14d5: 9c pushfq
14d6: 41 58 pop %r8
14d8: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
14df: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
14e6: 00
14e7: e9 a1 15 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
14ec: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
14f3: 00
14f4: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
14fb: ba d5 08 00 00 mov $0x8d5,%edx
1500: 21 14 24 and %edx,(%rsp)
1503: 9c pushfq
1504: f7 d2 not %edx
1506: 21 14 24 and %edx,(%rsp)
1509: 5a pop %rdx
150a: 09 14 24 or %edx,(%rsp)
150d: 9d popfq
150e: ba d5 08 00 00 mov $0x8d5,%edx
1513: f7 d2 not %edx
1515: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
151c: 29 84 24 f8 00 00 00 sub %eax,0xf8(%rsp)
1523: 9c pushfq
1524: 5a pop %rdx
1525: 81 e2 d5 08 00 00 and $0x8d5,%edx
152b: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
1532: e9 56 15 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1537: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
153e: 00
153f: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1546: b9 d5 08 00 00 mov $0x8d5,%ecx
154b: 21 0c 24 and %ecx,(%rsp)
154e: 9c pushfq
154f: f7 d1 not %ecx
1551: 21 0c 24 and %ecx,(%rsp)
1554: 59 pop %rcx
1555: 09 0c 24 or %ecx,(%rsp)
1558: 9d popfq
1559: b9 d5 08 00 00 mov $0x8d5,%ecx
155e: f7 d1 not %ecx
1560: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
1567: 48 29 84 24 f8 00 00 sub %rax,0xf8(%rsp)
156e: 00
156f: 9c pushfq
1570: 59 pop %rcx
1571: 81 e1 d5 08 00 00 and $0x8d5,%ecx
1577: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
157e: e9 0a 15 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x30 ... 0x35:
xor: /* xor */
emulate_2op_SrcV("xor", src, dst, _eflags);
1583: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
158a: 83 f8 01 cmp $0x1,%eax
158d: 75 4b jne 15da <x86_emulate_memop+0x154d>
158f: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1596: 00
1597: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
159e: bb d5 08 00 00 mov $0x8d5,%ebx
15a3: 21 1c 24 and %ebx,(%rsp)
15a6: 9c pushfq
15a7: f7 d3 not %ebx
15a9: 21 1c 24 and %ebx,(%rsp)
15ac: 5b pop %rbx
15ad: 09 1c 24 or %ebx,(%rsp)
15b0: 9d popfq
15b1: bb d5 08 00 00 mov $0x8d5,%ebx
15b6: f7 d3 not %ebx
15b8: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
15bf: 30 84 24 f8 00 00 00 xor %al,0xf8(%rsp)
15c6: 9c pushfq
15c7: 5b pop %rbx
15c8: 81 e3 d5 08 00 00 and $0x8d5,%ebx
15ce: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
15d5: e9 b3 14 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
15da: 83 f8 04 cmp $0x4,%eax
15dd: 74 5e je 163d <x86_emulate_memop+0x15b0>
15df: 83 f8 08 cmp $0x8,%eax
15e2: 0f 84 ac 00 00 00 je 1694 <x86_emulate_memop+0x1607>
15e8: 83 f8 02 cmp $0x2,%eax
15eb: 0f 85 9c 14 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
15f1: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
15f8: 00
15f9: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1600: bd d5 08 00 00 mov $0x8d5,%ebp
1605: 21 2c 24 and %ebp,(%rsp)
1608: 9c pushfq
1609: f7 d5 not %ebp
160b: 21 2c 24 and %ebp,(%rsp)
160e: 5d pop %rbp
160f: 09 2c 24 or %ebp,(%rsp)
1612: 9d popfq
1613: bd d5 08 00 00 mov $0x8d5,%ebp
1618: f7 d5 not %ebp
161a: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
1621: 66 31 84 24 f8 00 00 xor %ax,0xf8(%rsp)
1628: 00
1629: 9c pushfq
162a: 5d pop %rbp
162b: 81 e5 d5 08 00 00 and $0x8d5,%ebp
1631: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
1638: e9 50 14 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
163d: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1644: 00
1645: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
164c: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
1652: 44 21 04 24 and %r8d,(%rsp)
1656: 9c pushfq
1657: 41 f7 d0 not %r8d
165a: 44 21 04 24 and %r8d,(%rsp)
165e: 41 58 pop %r8
1660: 44 09 04 24 or %r8d,(%rsp)
1664: 9d popfq
1665: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
166b: 41 f7 d0 not %r8d
166e: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
1675: 00
1676: 31 84 24 f8 00 00 00 xor %eax,0xf8(%rsp)
167d: 9c pushfq
167e: 41 58 pop %r8
1680: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
1687: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
168e: 00
168f: e9 f9 13 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1694: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
169b: 00
169c: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
16a3: ba d5 08 00 00 mov $0x8d5,%edx
16a8: 21 14 24 and %edx,(%rsp)
16ab: 9c pushfq
16ac: f7 d2 not %edx
16ae: 21 14 24 and %edx,(%rsp)
16b1: 5a pop %rdx
16b2: 09 14 24 or %edx,(%rsp)
16b5: 9d popfq
16b6: ba d5 08 00 00 mov $0x8d5,%edx
16bb: f7 d2 not %edx
16bd: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
16c4: 48 31 84 24 f8 00 00 xor %rax,0xf8(%rsp)
16cb: 00
16cc: 9c pushfq
16cd: 5a pop %rdx
16ce: 81 e2 d5 08 00 00 and $0x8d5,%edx
16d4: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
16db: e9 ad 13 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x38 ... 0x3d:
cmp: /* cmp */
emulate_2op_SrcV("cmp", src, dst, _eflags);
16e0: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
16e7: 83 f8 01 cmp $0x1,%eax
16ea: 75 4b jne 1737 <x86_emulate_memop+0x16aa>
16ec: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
16f3: 00
16f4: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
16fb: b9 d5 08 00 00 mov $0x8d5,%ecx
1700: 21 0c 24 and %ecx,(%rsp)
1703: 9c pushfq
1704: f7 d1 not %ecx
1706: 21 0c 24 and %ecx,(%rsp)
1709: 59 pop %rcx
170a: 09 0c 24 or %ecx,(%rsp)
170d: 9d popfq
170e: b9 d5 08 00 00 mov $0x8d5,%ecx
1713: f7 d1 not %ecx
1715: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
171c: 38 84 24 f8 00 00 00 cmp %al,0xf8(%rsp)
1723: 9c pushfq
1724: 59 pop %rcx
1725: 81 e1 d5 08 00 00 and $0x8d5,%ecx
172b: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
1732: e9 56 13 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1737: 83 f8 04 cmp $0x4,%eax
173a: 74 5e je 179a <x86_emulate_memop+0x170d>
173c: 83 f8 08 cmp $0x8,%eax
173f: 0f 84 a0 00 00 00 je 17e5 <x86_emulate_memop+0x1758>
1745: 83 f8 02 cmp $0x2,%eax
1748: 0f 85 3f 13 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
174e: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1755: 00
1756: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
175d: bb d5 08 00 00 mov $0x8d5,%ebx
1762: 21 1c 24 and %ebx,(%rsp)
1765: 9c pushfq
1766: f7 d3 not %ebx
1768: 21 1c 24 and %ebx,(%rsp)
176b: 5b pop %rbx
176c: 09 1c 24 or %ebx,(%rsp)
176f: 9d popfq
1770: bb d5 08 00 00 mov $0x8d5,%ebx
1775: f7 d3 not %ebx
1777: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
177e: 66 39 84 24 f8 00 00 cmp %ax,0xf8(%rsp)
1785: 00
1786: 9c pushfq
1787: 5b pop %rbx
1788: 81 e3 d5 08 00 00 and $0x8d5,%ebx
178e: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
1795: e9 f3 12 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
179a: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
17a1: 00
17a2: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
17a9: bd d5 08 00 00 mov $0x8d5,%ebp
17ae: 21 2c 24 and %ebp,(%rsp)
17b1: 9c pushfq
17b2: f7 d5 not %ebp
17b4: 21 2c 24 and %ebp,(%rsp)
17b7: 5d pop %rbp
17b8: 09 2c 24 or %ebp,(%rsp)
17bb: 9d popfq
17bc: bd d5 08 00 00 mov $0x8d5,%ebp
17c1: f7 d5 not %ebp
17c3: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
17ca: 39 84 24 f8 00 00 00 cmp %eax,0xf8(%rsp)
17d1: 9c pushfq
17d2: 5d pop %rbp
17d3: 81 e5 d5 08 00 00 and $0x8d5,%ebp
17d9: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
17e0: e9 a8 12 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
17e5: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
17ec: 00
17ed: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
17f4: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
17fa: 44 21 04 24 and %r8d,(%rsp)
17fe: 9c pushfq
17ff: 41 f7 d0 not %r8d
1802: 44 21 04 24 and %r8d,(%rsp)
1806: 41 58 pop %r8
1808: 44 09 04 24 or %r8d,(%rsp)
180c: 9d popfq
180d: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
1813: 41 f7 d0 not %r8d
1816: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
181d: 00
181e: 48 39 84 24 f8 00 00 cmp %rax,0xf8(%rsp)
1825: 00
1826: 9c pushfq
1827: 41 58 pop %r8
1829: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
1830: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
1837: 00
1838: e9 50 12 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x63: /* movsxd */
if (mode != X86EMUL_MODE_PROT64)
183d: 83 7c 24 54 08 cmpl $0x8,0x54(%rsp)
1842: 0f 85 09 23 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
goto cannot_emulate;
dst.val = (s32) src.val;
1848: 48 63 84 24 18 01 00 movslq 0x118(%rsp),%rax
184f: 00
1850: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
1857: 00
1858: e9 30 12 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
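The single `movslq` at 1848 is the whole movsxd case: the low 32 bits of the source are sign-extended into the 64-bit destination, matching `dst.val = (s32) src.val`. A quick C model (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* movsxd semantics as emulated: sign-extend the low 32 bits of the
 * source into a 64-bit destination (illustrative helper, not KVM's). */
static uint64_t emulate_movsxd(uint64_t src)
{
    return (uint64_t)(int64_t)(int32_t)(uint32_t)src;
}
```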
case 0x80 ... 0x83: /* Grp1 */
switch (modrm_reg) {
185d: 80 7c 24 20 07 cmpb $0x7,0x20(%rsp)
1862: 0f 87 25 12 00 00 ja 2a8d <x86_emulate_memop+0x2a00>
1868: 0f b6 44 24 20 movzbl 0x20(%rsp),%eax
186d: ff 24 c5 00 00 00 00 jmpq *0x0(,%rax,8)
1870: R_X86_64_32S .rodata+0x40
case 0:
goto add;
case 1:
goto or;
case 2:
goto adc;
case 3:
goto sbb;
case 4:
goto and;
case 5:
goto sub;
case 6:
goto xor;
case 7:
goto cmp;
}
break;
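The Grp1 dispatch above is the `jmpq *0x0(,%rax,8)` through the jump table the compiler emitted at `.rodata+0x40`: the ModRM reg field (0..7) selects which of the shared ALU labels to jump to. The mapping, spelled out as a lookup (the mnemonic table itself is illustrative):

```c
#include <assert.h>
#include <string.h>

/* Grp1 (opcodes 0x80..0x83): the ModRM reg field selects the ALU op.
 * The switch in the source became the jump table at .rodata+0x40. */
static const char *grp1_op(unsigned modrm_reg)
{
    static const char *const ops[8] = {
        "add", "or", "adc", "sbb", "and", "sub", "xor", "cmp"
    };
    return modrm_reg < 8 ? ops[modrm_reg] : "invalid";
}
```

The `cmpb $0x7,0x20(%rsp)` / `ja` pair before the indirect jump is the bounds check on that table.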
case 0x84 ... 0x85:
test: /* test */
emulate_2op_SrcV("test", src, dst, _eflags);
1874: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
187b: 83 f8 01 cmp $0x1,%eax
187e: 75 4b jne 18cb <x86_emulate_memop+0x183e>
1880: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1887: 00
1888: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
188f: ba d5 08 00 00 mov $0x8d5,%edx
1894: 21 14 24 and %edx,(%rsp)
1897: 9c pushfq
1898: f7 d2 not %edx
189a: 21 14 24 and %edx,(%rsp)
189d: 5a pop %rdx
189e: 09 14 24 or %edx,(%rsp)
18a1: 9d popfq
18a2: ba d5 08 00 00 mov $0x8d5,%edx
18a7: f7 d2 not %edx
18a9: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
18b0: 84 84 24 f8 00 00 00 test %al,0xf8(%rsp)
18b7: 9c pushfq
18b8: 5a pop %rdx
18b9: 81 e2 d5 08 00 00 and $0x8d5,%edx
18bf: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
18c6: e9 c2 11 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
18cb: 83 f8 04 cmp $0x4,%eax
18ce: 74 5e je 192e <x86_emulate_memop+0x18a1>
18d0: 83 f8 08 cmp $0x8,%eax
18d3: 0f 84 a0 00 00 00 je 1979 <x86_emulate_memop+0x18ec>
18d9: 83 f8 02 cmp $0x2,%eax
18dc: 0f 85 ab 11 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
18e2: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
18e9: 00
18ea: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
18f1: b9 d5 08 00 00 mov $0x8d5,%ecx
18f6: 21 0c 24 and %ecx,(%rsp)
18f9: 9c pushfq
18fa: f7 d1 not %ecx
18fc: 21 0c 24 and %ecx,(%rsp)
18ff: 59 pop %rcx
1900: 09 0c 24 or %ecx,(%rsp)
1903: 9d popfq
1904: b9 d5 08 00 00 mov $0x8d5,%ecx
1909: f7 d1 not %ecx
190b: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
1912: 66 85 84 24 f8 00 00 test %ax,0xf8(%rsp)
1919: 00
191a: 9c pushfq
191b: 59 pop %rcx
191c: 81 e1 d5 08 00 00 and $0x8d5,%ecx
1922: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
1929: e9 5f 11 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
192e: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1935: 00
1936: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
193d: bb d5 08 00 00 mov $0x8d5,%ebx
1942: 21 1c 24 and %ebx,(%rsp)
1945: 9c pushfq
1946: f7 d3 not %ebx
1948: 21 1c 24 and %ebx,(%rsp)
194b: 5b pop %rbx
194c: 09 1c 24 or %ebx,(%rsp)
194f: 9d popfq
1950: bb d5 08 00 00 mov $0x8d5,%ebx
1955: f7 d3 not %ebx
1957: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
195e: 85 84 24 f8 00 00 00 test %eax,0xf8(%rsp)
1965: 9c pushfq
1966: 5b pop %rbx
1967: 81 e3 d5 08 00 00 and $0x8d5,%ebx
196d: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
1974: e9 14 11 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1979: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1980: 00
1981: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1988: bd d5 08 00 00 mov $0x8d5,%ebp
198d: 21 2c 24 and %ebp,(%rsp)
1990: 9c pushfq
1991: f7 d5 not %ebp
1993: 21 2c 24 and %ebp,(%rsp)
1996: 5d pop %rbp
1997: 09 2c 24 or %ebp,(%rsp)
199a: 9d popfq
199b: bd d5 08 00 00 mov $0x8d5,%ebp
19a0: f7 d5 not %ebp
19a2: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
19a9: 48 85 84 24 f8 00 00 test %rax,0xf8(%rsp)
19b0: 00
19b1: 9c pushfq
19b2: 5d pop %rbp
19b3: 81 e5 d5 08 00 00 and $0x8d5,%ebp
19b9: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
19c0: e9 c8 10 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x86 ... 0x87: /* xchg */
/* Write back the register source. */
switch (dst.bytes) {
19c5: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
19cc: 83 f8 02 cmp $0x2,%eax
19cf: 74 20 je 19f1 <x86_emulate_memop+0x1964>
19d1: 77 06 ja 19d9 <x86_emulate_memop+0x194c>
19d3: ff c8 dec %eax
19d5: 75 46 jne 1a1d <x86_emulate_memop+0x1990>
19d7: eb 0c jmp 19e5 <x86_emulate_memop+0x1958>
19d9: 83 f8 04 cmp $0x4,%eax
19dc: 74 20 je 19fe <x86_emulate_memop+0x1971>
19de: 83 f8 08 cmp $0x8,%eax
19e1: 75 3a jne 1a1d <x86_emulate_memop+0x1990>
19e3: eb 25 jmp 1a0a <x86_emulate_memop+0x197d>
case 1:
*(u8 *) src.ptr = (u8) dst.val;
19e5: 48 8b 84 24 28 01 00 mov 0x128(%rsp),%rax
19ec: 00
19ed: 88 10 mov %dl,(%rax)
19ef: eb 2c jmp 1a1d <x86_emulate_memop+0x1990>
break;
case 2:
*(u16 *) src.ptr = (u16) dst.val;
19f1: 48 8b 84 24 28 01 00 mov 0x128(%rsp),%rax
19f8: 00
19f9: 66 89 10 mov %dx,(%rax)
19fc: eb 1f jmp 1a1d <x86_emulate_memop+0x1990>
break;
case 4:
*src.ptr = (u32) dst.val;
19fe: 48 8b 84 24 28 01 00 mov 0x128(%rsp),%rax
1a05: 00
1a06: 89 d2 mov %edx,%edx
1a08: eb 10 jmp 1a1a <x86_emulate_memop+0x198d>
break; /* 64b reg: zero-extend */
case 8:
*src.ptr = dst.val;
1a0a: 48 8b 94 24 f8 00 00 mov 0xf8(%rsp),%rdx
1a11: 00
1a12: 48 8b 84 24 28 01 00 mov 0x128(%rsp),%rax
1a19: 00
1a1a: 48 89 10 mov %rdx,(%rax)
break;
}
/*
* Write back the memory destination with implicit LOCK
* prefix.
*/
dst.val = src.val;
1a1d: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1a24: 00
1a25: c7 44 24 4c 01 00 00 movl $0x1,0x4c(%rsp)
1a2c: 00
1a2d: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
1a34: 00
1a35: e9 53 10 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
lock_prefix = 1;
break;
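In the xchg writeback, the apparently useless `mov %edx,%edx` at 1a06 is doing real work: a 32-bit register write zero-extends into the full 64-bit register, so case 4 can fall through to the same 64-bit store as case 8 (hence the `/* 64b reg: zero-extend */` comment in the source). A model of that writeback rule (illustrative, not KVM's code):

```c
#include <assert.h>
#include <stdint.h>

/* x86-64 rule: writing a 32-bit value to a 64-bit register clears the
 * upper 32 bits, unlike 8- and 16-bit writes which merge. */
static uint64_t write_reg32(uint64_t val)
{
    return (uint32_t)val;  /* upper 32 bits become zero */
}
```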
case 0xa0 ... 0xa1: /* mov */
dst.ptr = (unsigned long *)&_regs[VCPU_REGS_RAX];
1a3a: 48 8d 44 24 70 lea 0x70(%rsp),%rax
1a3f: 48 89 84 24 08 01 00 mov %rax,0x108(%rsp)
1a46: 00
dst.val = src.val;
1a47: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1a4e: 00
1a4f: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
1a56: 00
_eip += ad_bytes; /* skip src displacement */
1a57: 8b 44 24 48 mov 0x48(%rsp),%eax
1a5b: 48 01 84 24 50 01 00 add %rax,0x150(%rsp)
1a62: 00
1a63: e9 25 10 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0xa2 ... 0xa3: /* mov */
dst.val = (unsigned long)_regs[VCPU_REGS_RAX];
1a68: 48 8b 44 24 70 mov 0x70(%rsp),%rax
1a6d: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
1a74: 00
_eip += ad_bytes; /* skip dst displacement */
1a75: 8b 44 24 48 mov 0x48(%rsp),%eax
1a79: 48 01 84 24 50 01 00 add %rax,0x150(%rsp)
1a80: 00
1a81: e9 07 10 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x88 ... 0x8b: /* mov */
case 0xc6 ... 0xc7: /* mov (sole member of Grp11) */
dst.val = src.val;
1a86: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
1a8d: 00
1a8e: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
1a95: 00
1a96: e9 f2 0f 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0x8f: /* pop (sole member of Grp1a) */
/* 64-bit mode: POP always pops a 64-bit operand. */
if (mode == X86EMUL_MODE_PROT64)
dst.bytes = 8;
1a9b: 83 7c 24 54 08 cmpl $0x8,0x54(%rsp)
1aa0: ba 08 00 00 00 mov $0x8,%edx
if ((rc = ops->read_std(register_address(ctxt->ss_base,
1aa5: 48 8b 04 24 mov (%rsp),%rax
1aa9: 0f 45 94 24 f4 00 00 cmovne 0xf4(%rsp),%edx
1ab0: 00
1ab1: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
1ab6: 49 8b 7d 38 mov 0x38(%r13),%rdi
1aba: 89 94 24 f4 00 00 00 mov %edx,0xf4(%rsp)
1ac1: 4c 8b 00 mov (%rax),%r8
1ac4: 75 0a jne 1ad0 <x86_emulate_memop+0x1a43>
1ac6: 48 8b 84 24 90 00 00 mov 0x90(%rsp),%rax
1acd: 00
1ace: eb 1a jmp 1aea <x86_emulate_memop+0x1a5d>
1ad0: 8b 4c 24 48 mov 0x48(%rsp),%ecx
1ad4: b8 01 00 00 00 mov $0x1,%eax
1ad9: c1 e1 03 shl $0x3,%ecx
1adc: 48 d3 e0 shl %cl,%rax
1adf: 48 ff c8 dec %rax
1ae2: 48 23 84 24 90 00 00 and 0x90(%rsp),%rax
1ae9: 00
1aea: 48 8d b4 24 f8 00 00 lea 0xf8(%rsp),%rsi
1af1: 00
1af2: 48 8d 3c 38 lea (%rax,%rdi,1),%rdi
1af6: 4c 89 e9 mov %r13,%rcx
1af9: 41 ff d0 callq *%r8
1afc: 85 c0 test %eax,%eax
1afe: 41 89 c7 mov %eax,%r15d
1b01: 0f 85 9d 10 00 00 jne 2ba4 <x86_emulate_memop+0x2b17>
_regs[VCPU_REGS_RSP]),
&dst.val, dst.bytes, ctxt)) != 0)
goto done;
register_address_increment(_regs[VCPU_REGS_RSP], dst.bytes);
1b07: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
1b0c: 48 63 bc 24 f4 00 00 movslq 0xf4(%rsp),%rdi
1b13: 00
1b14: 75 0d jne 1b23 <x86_emulate_memop+0x1a96>
1b16: 48 01 bc 24 90 00 00 add %rdi,0x90(%rsp)
1b1d: 00
1b1e: e9 6a 0f 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1b23: 8b 4c 24 48 mov 0x48(%rsp),%ecx
1b27: 48 8b b4 24 90 00 00 mov 0x90(%rsp),%rsi
1b2e: 00
1b2f: b8 01 00 00 00 mov $0x1,%eax
1b34: c1 e1 03 shl $0x3,%ecx
1b37: 48 d3 e0 shl %cl,%rax
1b3a: 48 8d 0c 37 lea (%rdi,%rsi,1),%rcx
1b3e: 48 8d 50 ff lea 0xffffffffffffffff(%rax),%rdx
1b42: 48 f7 d8 neg %rax
1b45: 48 21 f0 and %rsi,%rax
1b48: 48 21 ca and %rcx,%rdx
1b4b: 48 09 c2 or %rax,%rdx
1b4e: 48 89 94 24 90 00 00 mov %rdx,0x90(%rsp)
1b55: 00
1b56: e9 32 0f 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
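The tail of the pop case (1b23..1b56) is `register_address_increment` compiled out: when the address size is narrower than 8 bytes, RSP is bumped with wraparound confined to the low `ad_bytes*8` bits, while the high bits are carried through untouched, exactly the `neg`/`and`/`or` dance in the listing. A C reimplementation of that masking (illustrative helper name):

```c
#include <assert.h>
#include <stdint.h>

/* register_address_increment as compiled at 1b23..1b56: the increment
 * wraps within the low ad_bytes*8 bits; bits above the address size
 * are preserved. ad_bytes == 8 is the plain 64-bit add fast path. */
static uint64_t reg_addr_inc(uint64_t reg, int64_t inc, unsigned ad_bytes)
{
    if (ad_bytes == 8)
        return reg + (uint64_t)inc;
    uint64_t mask = (((uint64_t)1 << (ad_bytes * 8)) - 1);
    return (reg & ~mask) | ((reg + (uint64_t)inc) & mask);
}
```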
case 0xc0 ... 0xc1:
grp2: /* Grp2 */
switch (modrm_reg) {
1b5b: 80 7c 24 20 07 cmpb $0x7,0x20(%rsp)
1b60: 0f 87 27 0f 00 00 ja 2a8d <x86_emulate_memop+0x2a00>
1b66: 0f b6 44 24 20 movzbl 0x20(%rsp),%eax
1b6b: 44 8b 8c 24 f4 00 00 mov 0xf4(%rsp),%r9d
1b72: 00
1b73: ff 24 c5 00 00 00 00 jmpq *0x0(,%rax,8)
1b76: R_X86_64_32S .rodata+0x80
case 0: /* rol */
emulate_2op_SrcB("rol", src, dst, _eflags);
1b7a: 41 83 f9 01 cmp $0x1,%r9d
1b7e: 75 4a jne 1bca <x86_emulate_memop+0x1b3d>
1b80: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1b87: 00
1b88: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1b8f: b8 d5 08 00 00 mov $0x8d5,%eax
1b94: 21 04 24 and %eax,(%rsp)
1b97: 9c pushfq
1b98: f7 d0 not %eax
1b9a: 21 04 24 and %eax,(%rsp)
1b9d: 58 pop %rax
1b9e: 09 04 24 or %eax,(%rsp)
1ba1: 9d popfq
1ba2: b8 d5 08 00 00 mov $0x8d5,%eax
1ba7: f7 d0 not %eax
1ba9: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1bb0: d2 84 24 f8 00 00 00 rolb %cl,0xf8(%rsp)
1bb7: 9c pushfq
1bb8: 58 pop %rax
1bb9: 25 d5 08 00 00 and $0x8d5,%eax
1bbe: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1bc5: e9 c3 0e 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1bca: 41 83 f9 04 cmp $0x4,%r9d
1bce: 74 5f je 1c2f <x86_emulate_memop+0x1ba2>
1bd0: 41 83 f9 08 cmp $0x8,%r9d
1bd4: 0f 84 9f 00 00 00 je 1c79 <x86_emulate_memop+0x1bec>
1bda: 41 83 f9 02 cmp $0x2,%r9d
1bde: 0f 85 a9 0e 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
1be4: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1beb: 00
1bec: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1bf3: b8 d5 08 00 00 mov $0x8d5,%eax
1bf8: 21 04 24 and %eax,(%rsp)
1bfb: 9c pushfq
1bfc: f7 d0 not %eax
1bfe: 21 04 24 and %eax,(%rsp)
1c01: 58 pop %rax
1c02: 09 04 24 or %eax,(%rsp)
1c05: 9d popfq
1c06: b8 d5 08 00 00 mov $0x8d5,%eax
1c0b: f7 d0 not %eax
1c0d: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1c14: 66 d3 84 24 f8 00 00 rolw %cl,0xf8(%rsp)
1c1b: 00
1c1c: 9c pushfq
1c1d: 58 pop %rax
1c1e: 25 d5 08 00 00 and $0x8d5,%eax
1c23: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1c2a: e9 5e 0e 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1c2f: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1c36: 00
1c37: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1c3e: b8 d5 08 00 00 mov $0x8d5,%eax
1c43: 21 04 24 and %eax,(%rsp)
1c46: 9c pushfq
1c47: f7 d0 not %eax
1c49: 21 04 24 and %eax,(%rsp)
1c4c: 58 pop %rax
1c4d: 09 04 24 or %eax,(%rsp)
1c50: 9d popfq
1c51: b8 d5 08 00 00 mov $0x8d5,%eax
1c56: f7 d0 not %eax
1c58: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1c5f: d3 84 24 f8 00 00 00 roll %cl,0xf8(%rsp)
1c66: 9c pushfq
1c67: 58 pop %rax
1c68: 25 d5 08 00 00 and $0x8d5,%eax
1c6d: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1c74: e9 14 0e 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1c79: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1c80: 00
1c81: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1c88: b8 d5 08 00 00 mov $0x8d5,%eax
1c8d: 21 04 24 and %eax,(%rsp)
1c90: 9c pushfq
1c91: f7 d0 not %eax
1c93: 21 04 24 and %eax,(%rsp)
1c96: 58 pop %rax
1c97: 09 04 24 or %eax,(%rsp)
1c9a: 9d popfq
1c9b: b8 d5 08 00 00 mov $0x8d5,%eax
1ca0: f7 d0 not %eax
1ca2: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1ca9: 48 d3 84 24 f8 00 00 rolq %cl,0xf8(%rsp)
1cb0: 00
1cb1: 9c pushfq
1cb2: 58 pop %rax
1cb3: 25 d5 08 00 00 and $0x8d5,%eax
1cb8: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1cbf: e9 c9 0d 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
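The Grp2 cases differ from the earlier ALU pattern in one detail: `emulate_2op_SrcB` moves the source into `%rcx` rather than `%rax`, because x86 variable shifts and rotates take their count from CL (`rolb %cl,...` and friends). A C model of the 8-bit rol being run here (illustrative; the result, though not the flags, depends only on the count mod the operand width):

```c
#include <assert.h>
#include <stdint.h>

/* 8-bit rotate-left as executed by "rolb %cl,mem": the effective
 * rotation amount is count mod 8. Flag effects are not modeled. */
static uint8_t rol8(uint8_t v, unsigned count)
{
    count &= 7;
    return count ? (uint8_t)((v << count) | (v >> (8 - count))) : v;
}
```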
case 1: /* ror */
emulate_2op_SrcB("ror", src, dst, _eflags);
1cc4: 41 83 f9 01 cmp $0x1,%r9d
1cc8: 75 4a jne 1d14 <x86_emulate_memop+0x1c87>
1cca: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1cd1: 00
1cd2: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1cd9: b8 d5 08 00 00 mov $0x8d5,%eax
1cde: 21 04 24 and %eax,(%rsp)
1ce1: 9c pushfq
1ce2: f7 d0 not %eax
1ce4: 21 04 24 and %eax,(%rsp)
1ce7: 58 pop %rax
1ce8: 09 04 24 or %eax,(%rsp)
1ceb: 9d popfq
1cec: b8 d5 08 00 00 mov $0x8d5,%eax
1cf1: f7 d0 not %eax
1cf3: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1cfa: d2 8c 24 f8 00 00 00 rorb %cl,0xf8(%rsp)
1d01: 9c pushfq
1d02: 58 pop %rax
1d03: 25 d5 08 00 00 and $0x8d5,%eax
1d08: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1d0f: e9 79 0d 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1d14: 41 83 f9 04 cmp $0x4,%r9d
1d18: 74 5f je 1d79 <x86_emulate_memop+0x1cec>
1d1a: 41 83 f9 08 cmp $0x8,%r9d
1d1e: 0f 84 9f 00 00 00 je 1dc3 <x86_emulate_memop+0x1d36>
1d24: 41 83 f9 02 cmp $0x2,%r9d
1d28: 0f 85 5f 0d 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
1d2e: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1d35: 00
1d36: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1d3d: b8 d5 08 00 00 mov $0x8d5,%eax
1d42: 21 04 24 and %eax,(%rsp)
1d45: 9c pushfq
1d46: f7 d0 not %eax
1d48: 21 04 24 and %eax,(%rsp)
1d4b: 58 pop %rax
1d4c: 09 04 24 or %eax,(%rsp)
1d4f: 9d popfq
1d50: b8 d5 08 00 00 mov $0x8d5,%eax
1d55: f7 d0 not %eax
1d57: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1d5e: 66 d3 8c 24 f8 00 00 rorw %cl,0xf8(%rsp)
1d65: 00
1d66: 9c pushfq
1d67: 58 pop %rax
1d68: 25 d5 08 00 00 and $0x8d5,%eax
1d6d: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1d74: e9 14 0d 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1d79: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1d80: 00
1d81: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1d88: b8 d5 08 00 00 mov $0x8d5,%eax
1d8d: 21 04 24 and %eax,(%rsp)
1d90: 9c pushfq
1d91: f7 d0 not %eax
1d93: 21 04 24 and %eax,(%rsp)
1d96: 58 pop %rax
1d97: 09 04 24 or %eax,(%rsp)
1d9a: 9d popfq
1d9b: b8 d5 08 00 00 mov $0x8d5,%eax
1da0: f7 d0 not %eax
1da2: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1da9: d3 8c 24 f8 00 00 00 rorl %cl,0xf8(%rsp)
1db0: 9c pushfq
1db1: 58 pop %rax
1db2: 25 d5 08 00 00 and $0x8d5,%eax
1db7: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1dbe: e9 ca 0c 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1dc3: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1dca: 00
1dcb: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1dd2: b8 d5 08 00 00 mov $0x8d5,%eax
1dd7: 21 04 24 and %eax,(%rsp)
1dda: 9c pushfq
1ddb: f7 d0 not %eax
1ddd: 21 04 24 and %eax,(%rsp)
1de0: 58 pop %rax
1de1: 09 04 24 or %eax,(%rsp)
1de4: 9d popfq
1de5: b8 d5 08 00 00 mov $0x8d5,%eax
1dea: f7 d0 not %eax
1dec: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1df3: 48 d3 8c 24 f8 00 00 rorq %cl,0xf8(%rsp)
1dfa: 00
1dfb: 9c pushfq
1dfc: 58 pop %rax
1dfd: 25 d5 08 00 00 and $0x8d5,%eax
1e02: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1e09: e9 7f 0c 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 2: /* rcl */
emulate_2op_SrcB("rcl", src, dst, _eflags);
1e0e: 41 83 f9 01 cmp $0x1,%r9d
1e12: 75 4a jne 1e5e <x86_emulate_memop+0x1dd1>
1e14: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1e1b: 00
1e1c: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1e23: b8 d5 08 00 00 mov $0x8d5,%eax
1e28: 21 04 24 and %eax,(%rsp)
1e2b: 9c pushfq
1e2c: f7 d0 not %eax
1e2e: 21 04 24 and %eax,(%rsp)
1e31: 58 pop %rax
1e32: 09 04 24 or %eax,(%rsp)
1e35: 9d popfq
1e36: b8 d5 08 00 00 mov $0x8d5,%eax
1e3b: f7 d0 not %eax
1e3d: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1e44: d2 94 24 f8 00 00 00 rclb %cl,0xf8(%rsp)
1e4b: 9c pushfq
1e4c: 58 pop %rax
1e4d: 25 d5 08 00 00 and $0x8d5,%eax
1e52: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1e59: e9 2f 0c 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1e5e: 41 83 f9 04 cmp $0x4,%r9d
1e62: 74 5f je 1ec3 <x86_emulate_memop+0x1e36>
1e64: 41 83 f9 08 cmp $0x8,%r9d
1e68: 0f 84 9f 00 00 00 je 1f0d <x86_emulate_memop+0x1e80>
1e6e: 41 83 f9 02 cmp $0x2,%r9d
1e72: 0f 85 15 0c 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
1e78: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1e7f: 00
1e80: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1e87: b8 d5 08 00 00 mov $0x8d5,%eax
1e8c: 21 04 24 and %eax,(%rsp)
1e8f: 9c pushfq
1e90: f7 d0 not %eax
1e92: 21 04 24 and %eax,(%rsp)
1e95: 58 pop %rax
1e96: 09 04 24 or %eax,(%rsp)
1e99: 9d popfq
1e9a: b8 d5 08 00 00 mov $0x8d5,%eax
1e9f: f7 d0 not %eax
1ea1: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1ea8: 66 d3 94 24 f8 00 00 rclw %cl,0xf8(%rsp)
1eaf: 00
1eb0: 9c pushfq
1eb1: 58 pop %rax
1eb2: 25 d5 08 00 00 and $0x8d5,%eax
1eb7: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1ebe: e9 ca 0b 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1ec3: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1eca: 00
1ecb: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1ed2: b8 d5 08 00 00 mov $0x8d5,%eax
1ed7: 21 04 24 and %eax,(%rsp)
1eda: 9c pushfq
1edb: f7 d0 not %eax
1edd: 21 04 24 and %eax,(%rsp)
1ee0: 58 pop %rax
1ee1: 09 04 24 or %eax,(%rsp)
1ee4: 9d popfq
1ee5: b8 d5 08 00 00 mov $0x8d5,%eax
1eea: f7 d0 not %eax
1eec: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1ef3: d3 94 24 f8 00 00 00 rcll %cl,0xf8(%rsp)
1efa: 9c pushfq
1efb: 58 pop %rax
1efc: 25 d5 08 00 00 and $0x8d5,%eax
1f01: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1f08: e9 80 0b 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1f0d: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1f14: 00
1f15: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1f1c: b8 d5 08 00 00 mov $0x8d5,%eax
1f21: 21 04 24 and %eax,(%rsp)
1f24: 9c pushfq
1f25: f7 d0 not %eax
1f27: 21 04 24 and %eax,(%rsp)
1f2a: 58 pop %rax
1f2b: 09 04 24 or %eax,(%rsp)
1f2e: 9d popfq
1f2f: b8 d5 08 00 00 mov $0x8d5,%eax
1f34: f7 d0 not %eax
1f36: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1f3d: 48 d3 94 24 f8 00 00 rclq %cl,0xf8(%rsp)
1f44: 00
1f45: 9c pushfq
1f46: 58 pop %rax
1f47: 25 d5 08 00 00 and $0x8d5,%eax
1f4c: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1f53: e9 35 0b 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 3: /* rcr */
emulate_2op_SrcB("rcr", src, dst, _eflags);
1f58: 41 83 f9 01 cmp $0x1,%r9d
1f5c: 75 4a jne 1fa8 <x86_emulate_memop+0x1f1b>
1f5e: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1f65: 00
1f66: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1f6d: b8 d5 08 00 00 mov $0x8d5,%eax
1f72: 21 04 24 and %eax,(%rsp)
1f75: 9c pushfq
1f76: f7 d0 not %eax
1f78: 21 04 24 and %eax,(%rsp)
1f7b: 58 pop %rax
1f7c: 09 04 24 or %eax,(%rsp)
1f7f: 9d popfq
1f80: b8 d5 08 00 00 mov $0x8d5,%eax
1f85: f7 d0 not %eax
1f87: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1f8e: d2 9c 24 f8 00 00 00 rcrb %cl,0xf8(%rsp)
1f95: 9c pushfq
1f96: 58 pop %rax
1f97: 25 d5 08 00 00 and $0x8d5,%eax
1f9c: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
1fa3: e9 e5 0a 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
1fa8: 41 83 f9 04 cmp $0x4,%r9d
1fac: 74 5f je 200d <x86_emulate_memop+0x1f80>
1fae: 41 83 f9 08 cmp $0x8,%r9d
1fb2: 0f 84 9f 00 00 00 je 2057 <x86_emulate_memop+0x1fca>
1fb8: 41 83 f9 02 cmp $0x2,%r9d
1fbc: 0f 85 cb 0a 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
1fc2: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
1fc9: 00
1fca: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
1fd1: b8 d5 08 00 00 mov $0x8d5,%eax
1fd6: 21 04 24 and %eax,(%rsp)
1fd9: 9c pushfq
1fda: f7 d0 not %eax
1fdc: 21 04 24 and %eax,(%rsp)
1fdf: 58 pop %rax
1fe0: 09 04 24 or %eax,(%rsp)
1fe3: 9d popfq
1fe4: b8 d5 08 00 00 mov $0x8d5,%eax
1fe9: f7 d0 not %eax
1feb: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
1ff2: 66 d3 9c 24 f8 00 00 rcrw %cl,0xf8(%rsp)
1ff9: 00
1ffa: 9c pushfq
1ffb: 58 pop %rax
1ffc: 25 d5 08 00 00 and $0x8d5,%eax
2001: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2008: e9 80 0a 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
200d: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
2014: 00
2015: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
201c: b8 d5 08 00 00 mov $0x8d5,%eax
2021: 21 04 24 and %eax,(%rsp)
2024: 9c pushfq
2025: f7 d0 not %eax
2027: 21 04 24 and %eax,(%rsp)
202a: 58 pop %rax
202b: 09 04 24 or %eax,(%rsp)
202e: 9d popfq
202f: b8 d5 08 00 00 mov $0x8d5,%eax
2034: f7 d0 not %eax
2036: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
203d: d3 9c 24 f8 00 00 00 rcrl %cl,0xf8(%rsp)
2044: 9c pushfq
2045: 58 pop %rax
2046: 25 d5 08 00 00 and $0x8d5,%eax
204b: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2052: e9 36 0a 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
2057: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
205e: 00
205f: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2066: b8 d5 08 00 00 mov $0x8d5,%eax
206b: 21 04 24 and %eax,(%rsp)
206e: 9c pushfq
206f: f7 d0 not %eax
2071: 21 04 24 and %eax,(%rsp)
2074: 58 pop %rax
2075: 09 04 24 or %eax,(%rsp)
2078: 9d popfq
2079: b8 d5 08 00 00 mov $0x8d5,%eax
207e: f7 d0 not %eax
2080: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
2087: 48 d3 9c 24 f8 00 00 rcrq %cl,0xf8(%rsp)
208e: 00
208f: 9c pushfq
2090: 58 pop %rax
2091: 25 d5 08 00 00 and $0x8d5,%eax
2096: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
209d: e9 eb 09 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 4: /* sal/shl */
case 6: /* sal/shl */
emulate_2op_SrcB("sal", src, dst, _eflags);
20a2: 41 83 f9 01 cmp $0x1,%r9d
20a6: 75 4a jne 20f2 <x86_emulate_memop+0x2065>
20a8: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
20af: 00
20b0: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
20b7: b8 d5 08 00 00 mov $0x8d5,%eax
20bc: 21 04 24 and %eax,(%rsp)
20bf: 9c pushfq
20c0: f7 d0 not %eax
20c2: 21 04 24 and %eax,(%rsp)
20c5: 58 pop %rax
20c6: 09 04 24 or %eax,(%rsp)
20c9: 9d popfq
20ca: b8 d5 08 00 00 mov $0x8d5,%eax
20cf: f7 d0 not %eax
20d1: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
20d8: d2 a4 24 f8 00 00 00 shlb %cl,0xf8(%rsp)
20df: 9c pushfq
20e0: 58 pop %rax
20e1: 25 d5 08 00 00 and $0x8d5,%eax
20e6: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
20ed: e9 9b 09 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
20f2: 41 83 f9 04 cmp $0x4,%r9d
20f6: 74 5f je 2157 <x86_emulate_memop+0x20ca>
20f8: 41 83 f9 08 cmp $0x8,%r9d
20fc: 0f 84 9f 00 00 00 je 21a1 <x86_emulate_memop+0x2114>
2102: 41 83 f9 02 cmp $0x2,%r9d
2106: 0f 85 81 09 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
210c: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
2113: 00
2114: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
211b: b8 d5 08 00 00 mov $0x8d5,%eax
2120: 21 04 24 and %eax,(%rsp)
2123: 9c pushfq
2124: f7 d0 not %eax
2126: 21 04 24 and %eax,(%rsp)
2129: 58 pop %rax
212a: 09 04 24 or %eax,(%rsp)
212d: 9d popfq
212e: b8 d5 08 00 00 mov $0x8d5,%eax
2133: f7 d0 not %eax
2135: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
213c: 66 d3 a4 24 f8 00 00 shlw %cl,0xf8(%rsp)
2143: 00
2144: 9c pushfq
2145: 58 pop %rax
2146: 25 d5 08 00 00 and $0x8d5,%eax
214b: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2152: e9 36 09 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
2157: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
215e: 00
215f: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2166: b8 d5 08 00 00 mov $0x8d5,%eax
216b: 21 04 24 and %eax,(%rsp)
216e: 9c pushfq
216f: f7 d0 not %eax
2171: 21 04 24 and %eax,(%rsp)
2174: 58 pop %rax
2175: 09 04 24 or %eax,(%rsp)
2178: 9d popfq
2179: b8 d5 08 00 00 mov $0x8d5,%eax
217e: f7 d0 not %eax
2180: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
2187: d3 a4 24 f8 00 00 00 shll %cl,0xf8(%rsp)
218e: 9c pushfq
218f: 58 pop %rax
2190: 25 d5 08 00 00 and $0x8d5,%eax
2195: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
219c: e9 ec 08 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
21a1: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
21a8: 00
21a9: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
21b0: b8 d5 08 00 00 mov $0x8d5,%eax
21b5: 21 04 24 and %eax,(%rsp)
21b8: 9c pushfq
21b9: f7 d0 not %eax
21bb: 21 04 24 and %eax,(%rsp)
21be: 58 pop %rax
21bf: 09 04 24 or %eax,(%rsp)
21c2: 9d popfq
21c3: b8 d5 08 00 00 mov $0x8d5,%eax
21c8: f7 d0 not %eax
21ca: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
21d1: 48 d3 a4 24 f8 00 00 shlq %cl,0xf8(%rsp)
21d8: 00
21d9: 9c pushfq
21da: 58 pop %rax
21db: 25 d5 08 00 00 and $0x8d5,%eax
21e0: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
21e7: e9 a1 08 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 5: /* shr */
emulate_2op_SrcB("shr", src, dst, _eflags);
21ec: 41 83 f9 01 cmp $0x1,%r9d
21f0: 75 4a jne 223c <x86_emulate_memop+0x21af>
21f2: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
21f9: 00
21fa: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2201: b8 d5 08 00 00 mov $0x8d5,%eax
2206: 21 04 24 and %eax,(%rsp)
2209: 9c pushfq
220a: f7 d0 not %eax
220c: 21 04 24 and %eax,(%rsp)
220f: 58 pop %rax
2210: 09 04 24 or %eax,(%rsp)
2213: 9d popfq
2214: b8 d5 08 00 00 mov $0x8d5,%eax
2219: f7 d0 not %eax
221b: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
2222: d2 ac 24 f8 00 00 00 shrb %cl,0xf8(%rsp)
2229: 9c pushfq
222a: 58 pop %rax
222b: 25 d5 08 00 00 and $0x8d5,%eax
2230: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2237: e9 51 08 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
223c: 41 83 f9 04 cmp $0x4,%r9d
2240: 74 5f je 22a1 <x86_emulate_memop+0x2214>
2242: 41 83 f9 08 cmp $0x8,%r9d
2246: 0f 84 9f 00 00 00 je 22eb <x86_emulate_memop+0x225e>
224c: 41 83 f9 02 cmp $0x2,%r9d
2250: 0f 85 37 08 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
2256: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
225d: 00
225e: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2265: b8 d5 08 00 00 mov $0x8d5,%eax
226a: 21 04 24 and %eax,(%rsp)
226d: 9c pushfq
226e: f7 d0 not %eax
2270: 21 04 24 and %eax,(%rsp)
2273: 58 pop %rax
2274: 09 04 24 or %eax,(%rsp)
2277: 9d popfq
2278: b8 d5 08 00 00 mov $0x8d5,%eax
227d: f7 d0 not %eax
227f: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
2286: 66 d3 ac 24 f8 00 00 shrw %cl,0xf8(%rsp)
228d: 00
228e: 9c pushfq
228f: 58 pop %rax
2290: 25 d5 08 00 00 and $0x8d5,%eax
2295: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
229c: e9 ec 07 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
22a1: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
22a8: 00
22a9: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
22b0: b8 d5 08 00 00 mov $0x8d5,%eax
22b5: 21 04 24 and %eax,(%rsp)
22b8: 9c pushfq
22b9: f7 d0 not %eax
22bb: 21 04 24 and %eax,(%rsp)
22be: 58 pop %rax
22bf: 09 04 24 or %eax,(%rsp)
22c2: 9d popfq
22c3: b8 d5 08 00 00 mov $0x8d5,%eax
22c8: f7 d0 not %eax
22ca: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
22d1: d3 ac 24 f8 00 00 00 shrl %cl,0xf8(%rsp)
22d8: 9c pushfq
22d9: 58 pop %rax
22da: 25 d5 08 00 00 and $0x8d5,%eax
22df: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
22e6: e9 a2 07 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
22eb: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
22f2: 00
22f3: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
22fa: b8 d5 08 00 00 mov $0x8d5,%eax
22ff: 21 04 24 and %eax,(%rsp)
2302: 9c pushfq
2303: f7 d0 not %eax
2305: 21 04 24 and %eax,(%rsp)
2308: 58 pop %rax
2309: 09 04 24 or %eax,(%rsp)
230c: 9d popfq
230d: b8 d5 08 00 00 mov $0x8d5,%eax
2312: f7 d0 not %eax
2314: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
231b: 48 d3 ac 24 f8 00 00 shrq %cl,0xf8(%rsp)
2322: 00
2323: 9c pushfq
2324: 58 pop %rax
2325: 25 d5 08 00 00 and $0x8d5,%eax
232a: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2331: e9 57 07 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 7: /* sar */
emulate_2op_SrcB("sar", src, dst, _eflags);
2336: 41 83 f9 01 cmp $0x1,%r9d
233a: 75 4a jne 2386 <x86_emulate_memop+0x22f9>
233c: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
2343: 00
2344: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
234b: b8 d5 08 00 00 mov $0x8d5,%eax
2350: 21 04 24 and %eax,(%rsp)
2353: 9c pushfq
2354: f7 d0 not %eax
2356: 21 04 24 and %eax,(%rsp)
2359: 58 pop %rax
235a: 09 04 24 or %eax,(%rsp)
235d: 9d popfq
235e: b8 d5 08 00 00 mov $0x8d5,%eax
2363: f7 d0 not %eax
2365: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
236c: d2 bc 24 f8 00 00 00 sarb %cl,0xf8(%rsp)
2373: 9c pushfq
2374: 58 pop %rax
2375: 25 d5 08 00 00 and $0x8d5,%eax
237a: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2381: e9 07 07 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
2386: 41 83 f9 04 cmp $0x4,%r9d
238a: 74 5f je 23eb <x86_emulate_memop+0x235e>
238c: 41 83 f9 08 cmp $0x8,%r9d
2390: 0f 84 9f 00 00 00 je 2435 <x86_emulate_memop+0x23a8>
2396: 41 83 f9 02 cmp $0x2,%r9d
239a: 0f 85 ed 06 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
23a0: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
23a7: 00
23a8: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
23af: b8 d5 08 00 00 mov $0x8d5,%eax
23b4: 21 04 24 and %eax,(%rsp)
23b7: 9c pushfq
23b8: f7 d0 not %eax
23ba: 21 04 24 and %eax,(%rsp)
23bd: 58 pop %rax
23be: 09 04 24 or %eax,(%rsp)
23c1: 9d popfq
23c2: b8 d5 08 00 00 mov $0x8d5,%eax
23c7: f7 d0 not %eax
23c9: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
23d0: 66 d3 bc 24 f8 00 00 sarw %cl,0xf8(%rsp)
23d7: 00
23d8: 9c pushfq
23d9: 58 pop %rax
23da: 25 d5 08 00 00 and $0x8d5,%eax
23df: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
23e6: e9 a2 06 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
23eb: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
23f2: 00
23f3: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
23fa: b8 d5 08 00 00 mov $0x8d5,%eax
23ff: 21 04 24 and %eax,(%rsp)
2402: 9c pushfq
2403: f7 d0 not %eax
2405: 21 04 24 and %eax,(%rsp)
2408: 58 pop %rax
2409: 09 04 24 or %eax,(%rsp)
240c: 9d popfq
240d: b8 d5 08 00 00 mov $0x8d5,%eax
2412: f7 d0 not %eax
2414: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
241b: d3 bc 24 f8 00 00 00 sarl %cl,0xf8(%rsp)
2422: 9c pushfq
2423: 58 pop %rax
2424: 25 d5 08 00 00 and $0x8d5,%eax
2429: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2430: e9 58 06 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
2435: 48 8b 8c 24 18 01 00 mov 0x118(%rsp),%rcx
243c: 00
243d: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2444: b8 d5 08 00 00 mov $0x8d5,%eax
2449: 21 04 24 and %eax,(%rsp)
244c: 9c pushfq
244d: f7 d0 not %eax
244f: 21 04 24 and %eax,(%rsp)
2452: 58 pop %rax
2453: 09 04 24 or %eax,(%rsp)
2456: 9d popfq
2457: b8 d5 08 00 00 mov $0x8d5,%eax
245c: f7 d0 not %eax
245e: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
2465: 48 d3 bc 24 f8 00 00 sarq %cl,0xf8(%rsp)
246c: 00
246d: 9c pushfq
246e: 58 pop %rax
246f: 25 d5 08 00 00 and $0x8d5,%eax
2474: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
247b: e9 0d 06 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
}
break;
case 0xd0 ... 0xd1: /* Grp2 */
src.val = 1;
2480: 48 c7 84 24 18 01 00 movq $0x1,0x118(%rsp)
2487: 00 01 00 00 00
248c: e9 ca f6 ff ff jmpq 1b5b <x86_emulate_memop+0x1ace>
goto grp2;
case 0xd2 ... 0xd3: /* Grp2 */
src.val = _regs[VCPU_REGS_RCX];
2491: 48 8b 44 24 78 mov 0x78(%rsp),%rax
2496: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
249d: 00
249e: e9 b8 f6 ff ff jmpq 1b5b <x86_emulate_memop+0x1ace>
goto grp2;
case 0xf6 ... 0xf7: /* Grp3 */
switch (modrm_reg) {
24a3: 80 7c 24 20 02 cmpb $0x2,0x20(%rsp)
24a8: 0f 84 3b 01 00 00 je 25e9 <x86_emulate_memop+0x255c>
24ae: 72 10 jb 24c0 <x86_emulate_memop+0x2433>
24b0: 80 7c 24 20 03 cmpb $0x3,0x20(%rsp)
24b5: 0f 85 96 16 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
24bb: e9 36 01 00 00 jmpq 25f6 <x86_emulate_memop+0x2569>
case 0 ... 1: /* test */
/*
* Special case in Grp3: test has an immediate
* source operand.
*/
src.type = OP_IMM;
src.ptr = (unsigned long *)_eip;
src.bytes = (d & ByteOp) ? 1 : op_bytes;
24c0: f6 44 24 18 01 testb $0x1,0x18(%rsp)
24c5: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
24cc: 00
24cd: c7 84 24 10 01 00 00 movl $0x2,0x110(%rsp)
24d4: 02 00 00 00
24d8: 48 89 bc 24 28 01 00 mov %rdi,0x128(%rsp)
24df: 00
24e0: 0f 85 80 16 00 00 jne 3b66 <x86_emulate_memop+0x3ad9>
24e6: 8b 54 24 6c mov 0x6c(%rsp),%edx
if (src.bytes == 8)
24ea: 83 fa 08 cmp $0x8,%edx
24ed: 89 94 24 14 01 00 00 mov %edx,0x114(%rsp)
24f4: 75 10 jne 2506 <x86_emulate_memop+0x2479>
src.bytes = 4;
24f6: c7 84 24 14 01 00 00 movl $0x4,0x114(%rsp)
24fd: 04 00 00 00
2501: e9 97 00 00 00 jmpq 259d <x86_emulate_memop+0x2510>
switch (src.bytes) {
2506: 83 7c 24 6c 02 cmpl $0x2,0x6c(%rsp)
250b: 74 56 je 2563 <x86_emulate_memop+0x24d6>
250d: 83 7c 24 6c 04 cmpl $0x4,0x6c(%rsp)
2512: 0f 84 85 00 00 00 je 259d <x86_emulate_memop+0x2510>
2518: 83 7c 24 6c 01 cmpl $0x1,0x6c(%rsp)
251d: 0f 85 51 f3 ff ff jne 1874 <x86_emulate_memop+0x17e7>
case 1:
src.val = insn_fetch(s8, 1, _eip);
2523: 48 8b 1c 24 mov (%rsp),%rbx
2527: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
252e: 00
252f: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
2536: 00
2537: 49 03 7d 20 add 0x20(%r13),%rdi
253b: 4c 89 e9 mov %r13,%rcx
253e: ba 01 00 00 00 mov $0x1,%edx
2543: ff 13 callq *(%rbx)
2545: 85 c0 test %eax,%eax
2547: 41 89 c7 mov %eax,%r15d
254a: 0f 85 54 06 00 00 jne 2ba4 <x86_emulate_memop+0x2b17>
2550: 48 ff 84 24 50 01 00 incq 0x150(%rsp)
2557: 00
2558: 48 0f be 84 24 38 01 movsbq 0x138(%rsp),%rax
255f: 00 00
2561: eb 79 jmp 25dc <x86_emulate_memop+0x254f>
break;
case 2:
src.val = insn_fetch(s16, 2, _eip);
2563: 48 8b 2c 24 mov (%rsp),%rbp
2567: 49 03 7d 20 add 0x20(%r13),%rdi
256b: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
2572: 00
2573: 4c 89 e9 mov %r13,%rcx
2576: ba 02 00 00 00 mov $0x2,%edx
257b: ff 55 00 callq *0x0(%rbp)
257e: 85 c0 test %eax,%eax
2580: 41 89 c7 mov %eax,%r15d
2583: 0f 85 1b 06 00 00 jne 2ba4 <x86_emulate_memop+0x2b17>
2589: 48 83 84 24 50 01 00 addq $0x2,0x150(%rsp)
2590: 00 02
2592: 48 0f bf 84 24 38 01 movswq 0x138(%rsp),%rax
2599: 00 00
259b: eb 3f jmp 25dc <x86_emulate_memop+0x254f>
break;
case 4:
src.val = insn_fetch(s32, 4, _eip);
259d: 4c 8b 04 24 mov (%rsp),%r8
25a1: 48 8b bc 24 50 01 00 mov 0x150(%rsp),%rdi
25a8: 00
25a9: 48 8d b4 24 38 01 00 lea 0x138(%rsp),%rsi
25b0: 00
25b1: 49 03 7d 20 add 0x20(%r13),%rdi
25b5: 4c 89 e9 mov %r13,%rcx
25b8: ba 04 00 00 00 mov $0x4,%edx
25bd: 41 ff 10 callq *(%r8)
25c0: 85 c0 test %eax,%eax
25c2: 41 89 c7 mov %eax,%r15d
25c5: 0f 85 d9 05 00 00 jne 2ba4 <x86_emulate_memop+0x2b17>
25cb: 48 83 84 24 50 01 00 addq $0x4,0x150(%rsp)
25d2: 00 04
25d4: 48 63 84 24 38 01 00 movslq 0x138(%rsp),%rax
25db: 00
25dc: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
25e3: 00
25e4: e9 8b f2 ff ff jmpq 1874 <x86_emulate_memop+0x17e7>
break;
}
goto test;
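The `movsbq`/`movswq`/`movslq` instructions in the Grp3 immediate path above are the visible half of `insn_fetch()`: a 1-, 2- or 4-byte immediate is read from the instruction stream and sign-extended into the full 64-bit `src.val`. A hypothetical stand-in for that widening step (little-endian byte assembly, as on x86; `fetch_signed` is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend a little-endian immediate of 1, 2 or 4 bytes to 64 bits,
 * mirroring the movsbq/movswq/movslq seen in the disassembly. */
static int64_t fetch_signed(const uint8_t *ip, int bytes)
{
	switch (bytes) {
	case 1:
		return (int8_t)ip[0];
	case 2:
		return (int16_t)(ip[0] | (ip[1] << 8));
	case 4:
		return (int32_t)((uint32_t)ip[0] | ((uint32_t)ip[1] << 8) |
				 ((uint32_t)ip[2] << 16) | ((uint32_t)ip[3] << 24));
	}
	return 0;
}
```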
case 2: /* not */
dst.val = ~dst.val;
25e9: 48 f7 94 24 f8 00 00 notq 0xf8(%rsp)
25f0: 00
25f1: e9 97 04 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 3: /* neg */
emulate_1op("neg", dst, _eflags);
25f6: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
25fd: 83 f8 02 cmp $0x2,%eax
2600: 74 65 je 2667 <x86_emulate_memop+0x25da>
2602: 77 0a ja 260e <x86_emulate_memop+0x2581>
2604: ff c8 dec %eax
2606: 0f 85 81 04 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
260c: eb 17 jmp 2625 <x86_emulate_memop+0x2598>
260e: 83 f8 04 cmp $0x4,%eax
2611: 0f 84 93 00 00 00 je 26aa <x86_emulate_memop+0x261d>
2617: 83 f8 08 cmp $0x8,%eax
261a: 0f 85 6d 04 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
2620: e9 c7 00 00 00 jmpq 26ec <x86_emulate_memop+0x265f>
2625: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
262c: b8 d5 08 00 00 mov $0x8d5,%eax
2631: 21 04 24 and %eax,(%rsp)
2634: 9c pushfq
2635: f7 d0 not %eax
2637: 21 04 24 and %eax,(%rsp)
263a: 58 pop %rax
263b: 09 04 24 or %eax,(%rsp)
263e: 9d popfq
263f: b8 d5 08 00 00 mov $0x8d5,%eax
2644: f7 d0 not %eax
2646: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
264d: f6 9c 24 f8 00 00 00 negb 0xf8(%rsp)
2654: 9c pushfq
2655: 58 pop %rax
2656: 25 d5 08 00 00 and $0x8d5,%eax
265b: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2662: e9 26 04 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
2667: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
266e: b8 d5 08 00 00 mov $0x8d5,%eax
2673: 21 04 24 and %eax,(%rsp)
2676: 9c pushfq
2677: f7 d0 not %eax
2679: 21 04 24 and %eax,(%rsp)
267c: 58 pop %rax
267d: 09 04 24 or %eax,(%rsp)
2680: 9d popfq
2681: b8 d5 08 00 00 mov $0x8d5,%eax
2686: f7 d0 not %eax
2688: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
268f: 66 f7 9c 24 f8 00 00 negw 0xf8(%rsp)
2696: 00
2697: 9c pushfq
2698: 58 pop %rax
2699: 25 d5 08 00 00 and $0x8d5,%eax
269e: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
26a5: e9 e3 03 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
26aa: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
26b1: b8 d5 08 00 00 mov $0x8d5,%eax
26b6: 21 04 24 and %eax,(%rsp)
26b9: 9c pushfq
26ba: f7 d0 not %eax
26bc: 21 04 24 and %eax,(%rsp)
26bf: 58 pop %rax
26c0: 09 04 24 or %eax,(%rsp)
26c3: 9d popfq
26c4: b8 d5 08 00 00 mov $0x8d5,%eax
26c9: f7 d0 not %eax
26cb: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
26d2: f7 9c 24 f8 00 00 00 negl 0xf8(%rsp)
26d9: 9c pushfq
26da: 58 pop %rax
26db: 25 d5 08 00 00 and $0x8d5,%eax
26e0: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
26e7: e9 a1 03 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
26ec: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
26f3: b8 d5 08 00 00 mov $0x8d5,%eax
26f8: 21 04 24 and %eax,(%rsp)
26fb: 9c pushfq
26fc: f7 d0 not %eax
26fe: 21 04 24 and %eax,(%rsp)
2701: 58 pop %rax
2702: 09 04 24 or %eax,(%rsp)
2705: 9d popfq
2706: b8 d5 08 00 00 mov $0x8d5,%eax
270b: f7 d0 not %eax
270d: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
2714: 48 f7 9c 24 f8 00 00 negq 0xf8(%rsp)
271b: 00
271c: 9c pushfq
271d: 58 pop %rax
271e: 25 d5 08 00 00 and $0x8d5,%eax
2723: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
272a: e9 5e 03 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
default:
goto cannot_emulate;
}
break;
case 0xfe ... 0xff: /* Grp4/Grp5 */
switch (modrm_reg) {
272f: 80 7c 24 20 01 cmpb $0x1,0x20(%rsp)
2734: 0f 84 4b 01 00 00 je 2885 <x86_emulate_memop+0x27f8>
273a: 72 10 jb 274c <x86_emulate_memop+0x26bf>
273c: 80 7c 24 20 06 cmpb $0x6,0x20(%rsp)
2741: 0f 85 0a 14 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
2747: e9 72 02 00 00 jmpq 29be <x86_emulate_memop+0x2931>
case 0: /* inc */
emulate_1op("inc", dst, _eflags);
274c: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
2753: 83 f8 02 cmp $0x2,%eax
2756: 74 65 je 27bd <x86_emulate_memop+0x2730>
2758: 77 0a ja 2764 <x86_emulate_memop+0x26d7>
275a: ff c8 dec %eax
275c: 0f 85 2b 03 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
2762: eb 17 jmp 277b <x86_emulate_memop+0x26ee>
2764: 83 f8 04 cmp $0x4,%eax
2767: 0f 84 93 00 00 00 je 2800 <x86_emulate_memop+0x2773>
276d: 83 f8 08 cmp $0x8,%eax
2770: 0f 85 17 03 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
2776: e9 c7 00 00 00 jmpq 2842 <x86_emulate_memop+0x27b5>
277b: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2782: b8 d5 08 00 00 mov $0x8d5,%eax
2787: 21 04 24 and %eax,(%rsp)
278a: 9c pushfq
278b: f7 d0 not %eax
278d: 21 04 24 and %eax,(%rsp)
2790: 58 pop %rax
2791: 09 04 24 or %eax,(%rsp)
2794: 9d popfq
2795: b8 d5 08 00 00 mov $0x8d5,%eax
279a: f7 d0 not %eax
279c: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
27a3: fe 84 24 f8 00 00 00 incb 0xf8(%rsp)
27aa: 9c pushfq
27ab: 58 pop %rax
27ac: 25 d5 08 00 00 and $0x8d5,%eax
27b1: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
27b8: e9 d0 02 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
27bd: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
27c4: b8 d5 08 00 00 mov $0x8d5,%eax
27c9: 21 04 24 and %eax,(%rsp)
27cc: 9c pushfq
27cd: f7 d0 not %eax
27cf: 21 04 24 and %eax,(%rsp)
27d2: 58 pop %rax
27d3: 09 04 24 or %eax,(%rsp)
27d6: 9d popfq
27d7: b8 d5 08 00 00 mov $0x8d5,%eax
27dc: f7 d0 not %eax
27de: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
27e5: 66 ff 84 24 f8 00 00 incw 0xf8(%rsp)
27ec: 00
27ed: 9c pushfq
27ee: 58 pop %rax
27ef: 25 d5 08 00 00 and $0x8d5,%eax
27f4: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
27fb: e9 8d 02 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
2800: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2807: b8 d5 08 00 00 mov $0x8d5,%eax
280c: 21 04 24 and %eax,(%rsp)
280f: 9c pushfq
2810: f7 d0 not %eax
2812: 21 04 24 and %eax,(%rsp)
2815: 58 pop %rax
2816: 09 04 24 or %eax,(%rsp)
2819: 9d popfq
281a: b8 d5 08 00 00 mov $0x8d5,%eax
281f: f7 d0 not %eax
2821: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
2828: ff 84 24 f8 00 00 00 incl 0xf8(%rsp)
282f: 9c pushfq
2830: 58 pop %rax
2831: 25 d5 08 00 00 and $0x8d5,%eax
2836: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
283d: e9 4b 02 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
2842: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2849: b8 d5 08 00 00 mov $0x8d5,%eax
284e: 21 04 24 and %eax,(%rsp)
2851: 9c pushfq
2852: f7 d0 not %eax
2854: 21 04 24 and %eax,(%rsp)
2857: 58 pop %rax
2858: 09 04 24 or %eax,(%rsp)
285b: 9d popfq
285c: b8 d5 08 00 00 mov $0x8d5,%eax
2861: f7 d0 not %eax
2863: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
286a: 48 ff 84 24 f8 00 00 incq 0xf8(%rsp)
2871: 00
2872: 9c pushfq
2873: 58 pop %rax
2874: 25 d5 08 00 00 and $0x8d5,%eax
2879: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2880: e9 08 02 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 1: /* dec */
emulate_1op("dec", dst, _eflags);
2885: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
288c: 83 f8 02 cmp $0x2,%eax
288f: 74 65 je 28f6 <x86_emulate_memop+0x2869>
2891: 77 0a ja 289d <x86_emulate_memop+0x2810>
2893: ff c8 dec %eax
2895: 0f 85 f2 01 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
289b: eb 17 jmp 28b4 <x86_emulate_memop+0x2827>
289d: 83 f8 04 cmp $0x4,%eax
28a0: 0f 84 93 00 00 00 je 2939 <x86_emulate_memop+0x28ac>
28a6: 83 f8 08 cmp $0x8,%eax
28a9: 0f 85 de 01 00 00 jne 2a8d <x86_emulate_memop+0x2a00>
28af: e9 c7 00 00 00 jmpq 297b <x86_emulate_memop+0x28ee>
28b4: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
28bb: b8 d5 08 00 00 mov $0x8d5,%eax
28c0: 21 04 24 and %eax,(%rsp)
28c3: 9c pushfq
28c4: f7 d0 not %eax
28c6: 21 04 24 and %eax,(%rsp)
28c9: 58 pop %rax
28ca: 09 04 24 or %eax,(%rsp)
28cd: 9d popfq
28ce: b8 d5 08 00 00 mov $0x8d5,%eax
28d3: f7 d0 not %eax
28d5: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
28dc: fe 8c 24 f8 00 00 00 decb 0xf8(%rsp)
28e3: 9c pushfq
28e4: 58 pop %rax
28e5: 25 d5 08 00 00 and $0x8d5,%eax
28ea: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
28f1: e9 97 01 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
28f6: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
28fd: b8 d5 08 00 00 mov $0x8d5,%eax
2902: 21 04 24 and %eax,(%rsp)
2905: 9c pushfq
2906: f7 d0 not %eax
2908: 21 04 24 and %eax,(%rsp)
290b: 58 pop %rax
290c: 09 04 24 or %eax,(%rsp)
290f: 9d popfq
2910: b8 d5 08 00 00 mov $0x8d5,%eax
2915: f7 d0 not %eax
2917: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
291e: 66 ff 8c 24 f8 00 00 decw 0xf8(%rsp)
2925: 00
2926: 9c pushfq
2927: 58 pop %rax
2928: 25 d5 08 00 00 and $0x8d5,%eax
292d: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2934: e9 54 01 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
2939: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2940: b8 d5 08 00 00 mov $0x8d5,%eax
2945: 21 04 24 and %eax,(%rsp)
2948: 9c pushfq
2949: f7 d0 not %eax
294b: 21 04 24 and %eax,(%rsp)
294e: 58 pop %rax
294f: 09 04 24 or %eax,(%rsp)
2952: 9d popfq
2953: b8 d5 08 00 00 mov $0x8d5,%eax
2958: f7 d0 not %eax
295a: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
2961: ff 8c 24 f8 00 00 00 decl 0xf8(%rsp)
2968: 9c pushfq
2969: 58 pop %rax
296a: 25 d5 08 00 00 and $0x8d5,%eax
296f: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
2976: e9 12 01 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
297b: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
2982: b8 d5 08 00 00 mov $0x8d5,%eax
2987: 21 04 24 and %eax,(%rsp)
298a: 9c pushfq
298b: f7 d0 not %eax
298d: 21 04 24 and %eax,(%rsp)
2990: 58 pop %rax
2991: 09 04 24 or %eax,(%rsp)
2994: 9d popfq
2995: b8 d5 08 00 00 mov $0x8d5,%eax
299a: f7 d0 not %eax
299c: 21 84 24 48 01 00 00 and %eax,0x148(%rsp)
29a3: 48 ff 8c 24 f8 00 00 decq 0xf8(%rsp)
29aa: 00
29ab: 9c pushfq
29ac: 58 pop %rax
29ad: 25 d5 08 00 00 and $0x8d5,%eax
29b2: 09 84 24 48 01 00 00 or %eax,0x148(%rsp)
29b9: e9 cf 00 00 00 jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 6: /* push */
/* 64-bit mode: PUSH always pushes a 64-bit operand. */
if (mode == X86EMUL_MODE_PROT64) {
29be: 83 7c 24 54 08 cmpl $0x8,0x54(%rsp)
29c3: 75 34 jne 29f9 <x86_emulate_memop+0x296c>
dst.bytes = 8;
if ((rc = ops->read_std((unsigned long)dst.ptr,
29c5: 48 8b 1c 24 mov (%rsp),%rbx
29c9: c7 84 24 f4 00 00 00 movl $0x8,0xf4(%rsp)
29d0: 08 00 00 00
29d4: 48 8d b4 24 f8 00 00 lea 0xf8(%rsp),%rsi
29db: 00
29dc: 48 8b bc 24 08 01 00 mov 0x108(%rsp),%rdi
29e3: 00
29e4: 4c 89 e9 mov %r13,%rcx
29e7: ba 08 00 00 00 mov $0x8,%edx
29ec: ff 13 callq *(%rbx)
29ee: 85 c0 test %eax,%eax
29f0: 41 89 c7 mov %eax,%r15d
29f3: 0f 85 ab 01 00 00 jne 2ba4 <x86_emulate_memop+0x2b17>
&dst.val, 8,
ctxt)) != 0)
goto done;
}
register_address_increment(_regs[VCPU_REGS_RSP],
29f9: 44 8b 84 24 f4 00 00 mov 0xf4(%rsp),%r8d
2a00: 00
2a01: 48 8b 2c 24 mov (%rsp),%rbp
2a05: 49 8b 7d 38 mov 0x38(%r13),%rdi
2a09: 44 89 c0 mov %r8d,%eax
2a0c: 4c 8b 4d 08 mov 0x8(%rbp),%r9
2a10: f7 d8 neg %eax
2a12: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2a17: 48 63 d0 movslq %eax,%rdx
2a1a: 75 15 jne 2a31 <x86_emulate_memop+0x29a4>
2a1c: 48 89 d0 mov %rdx,%rax
2a1f: 48 03 84 24 90 00 00 add 0x90(%rsp),%rax
2a26: 00
2a27: 48 89 84 24 90 00 00 mov %rax,0x90(%rsp)
2a2e: 00
2a2f: eb 38 jmp 2a69 <x86_emulate_memop+0x29dc>
2a31: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2a35: 48 8b b4 24 90 00 00 mov 0x90(%rsp),%rsi
2a3c: 00
2a3d: b8 01 00 00 00 mov $0x1,%eax
2a42: c1 e1 03 shl $0x3,%ecx
2a45: 48 01 f2 add %rsi,%rdx
2a48: 48 d3 e0 shl %cl,%rax
2a4b: 48 8d 48 ff lea 0xffffffffffffffff(%rax),%rcx
2a4f: 48 f7 d8 neg %rax
2a52: 48 21 f0 and %rsi,%rax
2a55: 48 21 ca and %rcx,%rdx
2a58: 48 09 c2 or %rax,%rdx
-dst.bytes);
if ((rc = ops->write_std(
2a5b: 48 89 d0 mov %rdx,%rax
2a5e: 48 89 94 24 90 00 00 mov %rdx,0x90(%rsp)
2a65: 00
2a66: 48 21 c8 and %rcx,%rax
2a69: 48 8d b4 24 f8 00 00 lea 0xf8(%rsp),%rsi
2a70: 00
2a71: 48 01 c7 add %rax,%rdi
2a74: 4c 89 e9 mov %r13,%rcx
2a77: 44 89 c2 mov %r8d,%edx
2a7a: 41 ff d1 callq *%r9
2a7d: e9 e5 00 00 00 jmpq 2b67 <x86_emulate_memop+0x2ada>
register_address(ctxt->ss_base,
_regs[VCPU_REGS_RSP]),
&dst.val, dst.bytes, ctxt)) != 0)
goto done;
no_wb = 1;
break;
default:
goto cannot_emulate;
}
break;
}
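The asm at 2a31..2a58 shows what `register_address_increment()` compiles to when the address size is narrower than 64 bits: the increment wraps modulo 2^(8*ad_bytes) while the register's upper bits are preserved (the `shl %cl`, `neg`, and the two `and`/`or` steps build and apply the mask). A sketch under that reading (`addr_increment` and its parameter names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Advance a register-held address by inc, wrapping within the current
 * address size (ad_bytes = 2, 4 or 8) and leaving upper bits intact,
 * as the masking sequence in the disassembly does. */
static unsigned long addr_increment(unsigned long reg, long inc, int ad_bytes)
{
	if (ad_bytes == 8)		/* fast path taken at 2a1c */
		return reg + inc;
	unsigned long mask = (1UL << (ad_bytes * 8)) - 1;
	return (reg & ~mask) | ((reg + inc) & mask);
}
```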
writeback:
if (!no_wb) {
switch (dst.type) {
case OP_REG:
/* The 4-byte case *is* correct: in 64-bit mode we zero-extend. */
switch (dst.bytes) {
case 1:
*(u8 *)dst.ptr = (u8)dst.val;
break;
case 2:
*(u16 *)dst.ptr = (u16)dst.val;
break;
case 4:
*dst.ptr = (u32)dst.val;
break; /* 64b: zero-ext */
case 8:
*dst.ptr = dst.val;
break;
}
break;
case OP_MEM:
if (lock_prefix)
rc = ops->cmpxchg_emulated((unsigned long)dst.
ptr, &dst.orig_val,
&dst.val, dst.bytes,
ctxt);
else
rc = ops->write_emulated((unsigned long)dst.ptr,
&dst.val, dst.bytes,
ctxt);
if (rc != 0)
goto done;
default:
break;
}
}
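The OP_REG writeback above is worth a second look because of its comment: the 4-byte case stores through the full `unsigned long` on purpose, since a 32-bit destination in 64-bit mode zero-extends to 64 bits, while the 1- and 2-byte cases touch only the low byte/word. A minimal sketch of that switch (`reg_writeback` is an illustrative name, not the source's):

```c
#include <assert.h>
#include <stdint.h>

/* Write val into a shadow register slot, mirroring the OP_REG switch:
 * 1/2-byte stores are partial, the 4-byte store zero-extends to 64 bits. */
static void reg_writeback(unsigned long *slot, unsigned long val, int bytes)
{
	switch (bytes) {
	case 1: *(uint8_t *)slot = (uint8_t)val;	break; /* low byte only */
	case 2: *(uint16_t *)slot = (uint16_t)val;	break; /* low word only */
	case 4: *slot = (uint32_t)val;			break; /* zero-extends  */
	case 8: *slot = val;				break;
	}
}
```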
/* Commit shadow register state. */
memcpy(ctxt->vcpu->regs, _regs, sizeof _regs);
ctxt->eflags = _eflags;
ctxt->vcpu->rip = _eip;
done:
return (rc == X86EMUL_UNHANDLEABLE) ? -1 : 0;
special_insn:
if (twobyte)
goto twobyte_special_insn;
if (rep_prefix) {
if (_regs[VCPU_REGS_RCX] == 0) {
ctxt->vcpu->rip = _eip;
goto done;
}
_regs[VCPU_REGS_RCX]--;
_eip = ctxt->vcpu->rip;
}
switch (b) {
case 0xa4 ... 0xa5: /* movs */
dst.type = OP_MEM;
dst.bytes = (d & ByteOp) ? 1 : op_bytes;
dst.ptr = (unsigned long *)register_address(ctxt->es_base,
_regs[VCPU_REGS_RDI]);
if ((rc = ops->read_emulated(register_address(
override_base ? *override_base : ctxt->ds_base,
_regs[VCPU_REGS_RSI]), &dst.val, dst.bytes, ctxt)) != 0)
goto done;
register_address_increment(_regs[VCPU_REGS_RSI],
(_eflags & EFLG_DF) ? -dst.bytes : dst.bytes);
register_address_increment(_regs[VCPU_REGS_RDI],
(_eflags & EFLG_DF) ? -dst.bytes : dst.bytes);
break;
case 0xa6 ... 0xa7: /* cmps */
DPRINTF("Urk! I don't handle CMPS.\n");
goto cannot_emulate;
case 0xaa ... 0xab: /* stos */
dst.type = OP_MEM;
dst.bytes = (d & ByteOp) ? 1 : op_bytes;
dst.ptr = (unsigned long *)cr2;
dst.val = _regs[VCPU_REGS_RAX];
register_address_increment(_regs[VCPU_REGS_RDI],
(_eflags & EFLG_DF) ? -dst.bytes : dst.bytes);
break;
case 0xac ... 0xad: /* lods */
dst.type = OP_REG;
dst.bytes = (d & ByteOp) ? 1 : op_bytes;
dst.ptr = (unsigned long *)&_regs[VCPU_REGS_RAX];
if ((rc = ops->read_emulated(cr2, &dst.val, dst.bytes, ctxt)) != 0)
goto done;
register_address_increment(_regs[VCPU_REGS_RSI],
(_eflags & EFLG_DF) ? -dst.bytes : dst.bytes);
break;
case 0xae ... 0xaf: /* scas */
DPRINTF("Urk! I don't handle SCAS.\n");
goto cannot_emulate;
case 0xf4: /* hlt */
ctxt->vcpu->halt_request = 1;
goto done;
case 0xc3: /* ret */
dst.ptr = &_eip;
goto pop_instruction;
case 0x58 ... 0x5f: /* pop reg */
dst.ptr = (unsigned long *)&_regs[b & 0x7];
pop_instruction:
if ((rc = ops->read_std(register_address(ctxt->ss_base,
_regs[VCPU_REGS_RSP]), dst.ptr, op_bytes, ctxt)) != 0)
goto done;
register_address_increment(_regs[VCPU_REGS_RSP], op_bytes);
no_wb = 1; /* Disable writeback. */
break;
}
goto writeback;
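The string instructions above (movs/stos/lods) all advance RSI/RDI by the same rule: plus or minus the operand size depending on EFLG_DF, the direction flag (bit 0x400 of EFLAGS). A one-line sketch of that step (`string_step` is an illustrative name):

```c
#include <assert.h>

#define EFLG_DF 0x400UL		/* direction flag, as in the source's EFLG_* */

/* Step taken by RSI/RDI after a string op: forward when DF is clear,
 * backward when DF is set. */
static long string_step(unsigned long eflags, int bytes)
{
	return (eflags & EFLG_DF) ? -bytes : bytes;
}
```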
twobyte_insn:
switch (b) {
case 0x01: /* lgdt, lidt, lmsw */
switch (modrm_reg) {
u16 size;
unsigned long address;
case 2: /* lgdt */
rc = read_descriptor(ctxt, ops, src.ptr,
&size, &address, op_bytes);
if (rc)
goto done;
realmode_lgdt(ctxt->vcpu, size, address);
break;
case 3: /* lidt */
rc = read_descriptor(ctxt, ops, src.ptr,
&size, &address, op_bytes);
if (rc)
goto done;
realmode_lidt(ctxt->vcpu, size, address);
break;
case 4: /* smsw */
if (modrm_mod != 3)
goto cannot_emulate;
*(u16 *)&_regs[modrm_rm]
= realmode_get_cr(ctxt->vcpu, 0);
break;
case 6: /* lmsw */
if (modrm_mod != 3)
goto cannot_emulate;
realmode_lmsw(ctxt->vcpu, (u16)modrm_val, &_eflags);
break;
case 7: /* invlpg*/
emulate_invlpg(ctxt->vcpu, cr2);
break;
default:
goto cannot_emulate;
}
break;
case 0x21: /* mov from dr to reg */
if (modrm_mod != 3)
goto cannot_emulate;
rc = emulator_get_dr(ctxt, modrm_reg, &_regs[modrm_rm]);
break;
case 0x23: /* mov from reg to dr */
if (modrm_mod != 3)
goto cannot_emulate;
rc = emulator_set_dr(ctxt, modrm_reg, _regs[modrm_rm]);
break;
case 0x40 ... 0x4f: /* cmov */
dst.val = dst.orig_val = src.val;
no_wb = 1;
/*
* First, assume we're decoding an even cmov opcode
* (lsb == 0).
*/
switch ((b & 15) >> 1) {
case 0: /* cmovo */
no_wb = (_eflags & EFLG_OF) ? 0 : 1;
break;
case 1: /* cmovb/cmovc/cmovnae */
no_wb = (_eflags & EFLG_CF) ? 0 : 1;
break;
case 2: /* cmovz/cmove */
no_wb = (_eflags & EFLG_ZF) ? 0 : 1;
break;
case 3: /* cmovbe/cmovna */
no_wb = (_eflags & (EFLG_CF | EFLG_ZF)) ? 0 : 1;
break;
case 4: /* cmovs */
no_wb = (_eflags & EFLG_SF) ? 0 : 1;
break;
case 5: /* cmovp/cmovpe */
no_wb = (_eflags & EFLG_PF) ? 0 : 1;
break;
case 7: /* cmovle/cmovng */
no_wb = (_eflags & EFLG_ZF) ? 0 : 1;
/* fall through */
case 6: /* cmovl/cmovnge */
no_wb &= (!(_eflags & EFLG_SF) !=
!(_eflags & EFLG_OF)) ? 0 : 1;
break;
}
/* Odd cmov opcodes (lsb == 1) have inverted sense. */
no_wb ^= b & 1;
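The cmov decode above relies on the opcode-pair symmetry of 0x40..0x4f: each even opcode tests a condition, and the odd opcode of the pair is its negation, handled by flipping `no_wb` with the opcode's low bit. A self-contained sketch of the same decision (EFLG_* values are the real x86 flag bits; `cmov_no_writeback` is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

#define EFLG_OF 0x800
#define EFLG_SF 0x080
#define EFLG_ZF 0x040
#define EFLG_PF 0x004
#define EFLG_CF 0x001

/* Return 1 if a cmov with opcode byte b (0x40..0x4f) should suppress the
 * register writeback given eflags; odd opcodes invert the even condition. */
static int cmov_no_writeback(uint8_t b, unsigned long eflags)
{
	int no_wb = 1;

	switch ((b & 15) >> 1) {
	case 0: no_wb = (eflags & EFLG_OF) ? 0 : 1; break;		/* cmovo  */
	case 1: no_wb = (eflags & EFLG_CF) ? 0 : 1; break;		/* cmovb  */
	case 2: no_wb = (eflags & EFLG_ZF) ? 0 : 1; break;		/* cmovz  */
	case 3: no_wb = (eflags & (EFLG_CF | EFLG_ZF)) ? 0 : 1; break;	/* cmovbe */
	case 4: no_wb = (eflags & EFLG_SF) ? 0 : 1; break;		/* cmovs  */
	case 5: no_wb = (eflags & EFLG_PF) ? 0 : 1; break;		/* cmovp  */
	case 7: no_wb = (eflags & EFLG_ZF) ? 0 : 1;			/* cmovle */
		/* fall through */
	case 6: no_wb &= (!(eflags & EFLG_SF) !=
			  !(eflags & EFLG_OF)) ? 0 : 1; break;		/* cmovl  */
	}
	return no_wb ^ (b & 1);	/* odd opcodes: inverted sense */
}
```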
2a82: 83 e1 01 and $0x1,%ecx
2a85: 39 ca cmp %ecx,%edx
2a87: 0f 85 e1 00 00 00 jne 2b6e <x86_emulate_memop+0x2ae1>
2a8d: 8b 84 24 f0 00 00 00 mov 0xf0(%rsp),%eax
2a94: 85 c0 test %eax,%eax
2a96: 74 0a je 2aa2 <x86_emulate_memop+0x2a15>
2a98: ff c8 dec %eax
2a9a: 0f 85 ce 00 00 00 jne 2b6e <x86_emulate_memop+0x2ae1>
2aa0: eb 7e jmp 2b20 <x86_emulate_memop+0x2a93>
2aa2: 8b 84 24 f4 00 00 00 mov 0xf4(%rsp),%eax
2aa9: 83 f8 02 cmp $0x2,%eax
2aac: 74 33 je 2ae1 <x86_emulate_memop+0x2a54>
2aae: 77 0a ja 2aba <x86_emulate_memop+0x2a2d>
2ab0: ff c8 dec %eax
2ab2: 0f 85 b6 00 00 00 jne 2b6e <x86_emulate_memop+0x2ae1>
2ab8: eb 10 jmp 2aca <x86_emulate_memop+0x2a3d>
2aba: 83 f8 04 cmp $0x4,%eax
2abd: 74 37 je 2af6 <x86_emulate_memop+0x2a69>
2abf: 83 f8 08 cmp $0x8,%eax
2ac2: 0f 85 a6 00 00 00 jne 2b6e <x86_emulate_memop+0x2ae1>
2ac8: eb 41 jmp 2b0b <x86_emulate_memop+0x2a7e>
2aca: 48 8b 94 24 f8 00 00 mov 0xf8(%rsp),%rdx
2ad1: 00
2ad2: 48 8b 84 24 08 01 00 mov 0x108(%rsp),%rax
2ad9: 00
2ada: 88 10 mov %dl,(%rax)
2adc: e9 8d 00 00 00 jmpq 2b6e <x86_emulate_memop+0x2ae1>
2ae1: 48 8b 94 24 f8 00 00 mov 0xf8(%rsp),%rdx
2ae8: 00
2ae9: 48 8b 84 24 08 01 00 mov 0x108(%rsp),%rax
2af0: 00
2af1: 66 89 10 mov %dx,(%rax)
2af4: eb 78 jmp 2b6e <x86_emulate_memop+0x2ae1>
2af6: 44 8b 84 24 f8 00 00 mov 0xf8(%rsp),%r8d
2afd: 00
2afe: 48 8b 84 24 08 01 00 mov 0x108(%rsp),%rax
2b05: 00
2b06: 4c 89 00 mov %r8,(%rax)
2b09: eb 63 jmp 2b6e <x86_emulate_memop+0x2ae1>
2b0b: 48 8b 94 24 f8 00 00 mov 0xf8(%rsp),%rdx
2b12: 00
2b13: 48 8b 84 24 08 01 00 mov 0x108(%rsp),%rax
2b1a: 00
2b1b: 48 89 10 mov %rdx,(%rax)
2b1e: eb 4e jmp 2b6e <x86_emulate_memop+0x2ae1>
2b20: 83 7c 24 4c 00 cmpl $0x0,0x4c(%rsp)
2b25: 44 8b 8c 24 f4 00 00 mov 0xf4(%rsp),%r9d
2b2c: 00
2b2d: 48 8d 84 24 f0 00 00 lea 0xf0(%rsp),%rax
2b34: 00
2b35: 48 8b bc 24 08 01 00 mov 0x108(%rsp),%rdi
2b3c: 00
2b3d: 74 17 je 2b56 <x86_emulate_memop+0x2ac9>
2b3f: 48 8b 1c 24 mov (%rsp),%rbx
2b43: 48 8d 50 08 lea 0x8(%rax),%rdx
2b47: 48 8d 70 10 lea 0x10(%rax),%rsi
2b4b: 4d 89 e8 mov %r13,%r8
2b4e: 44 89 c9 mov %r9d,%ecx
2b51: ff 53 20 callq *0x20(%rbx)
2b54: eb 11 jmp 2b67 <x86_emulate_memop+0x2ada>
2b56: 48 8b 2c 24 mov (%rsp),%rbp
2b5a: 48 8d 70 08 lea 0x8(%rax),%rsi
2b5e: 4c 89 e9 mov %r13,%rcx
2b61: 44 89 ca mov %r9d,%edx
2b64: ff 55 18 callq *0x18(%rbp)
2b67: 85 c0 test %eax,%eax
2b69: 41 89 c7 mov %eax,%r15d
2b6c: 75 36 jne 2ba4 <x86_emulate_memop+0x2b17>
2b6e: 49 8b 7d 00 mov 0x0(%r13),%rdi
2b72: 48 8d 74 24 70 lea 0x70(%rsp),%rsi
2b77: ba 80 00 00 00 mov $0x80,%edx
2b7c: 48 83 ef 80 sub $0xffffffffffffff80,%rdi
2b80: e8 00 00 00 00 callq 2b85 <x86_emulate_memop+0x2af8>
2b81: R_X86_64_PC32 __memcpy+0xfffffffffffffffc
2b85: 48 8b 84 24 48 01 00 mov 0x148(%rsp),%rax
2b8c: 00
2b8d: 49 8b 55 00 mov 0x0(%r13),%rdx
2b91: 49 89 45 08 mov %rax,0x8(%r13)
2b95: 48 8b 84 24 50 01 00 mov 0x150(%rsp),%rax
2b9c: 00
2b9d: 48 89 82 00 01 00 00 mov %rax,0x100(%rdx)
2ba4: 41 ff cf dec %r15d
2ba7: 0f 84 a4 0f 00 00 je 3b51 <x86_emulate_memop+0x3ac4>
2bad: 31 c0 xor %eax,%eax
2baf: e9 ca 0f 00 00 jmpq 3b7e <x86_emulate_memop+0x3af1>
2bb4: 80 7c 24 1d 00 cmpb $0x0,0x1d(%rsp)
2bb9: 0f 85 aa 0d 00 00 jne 3969 <x86_emulate_memop+0x38dc>
2bbf: 83 7c 24 50 00 cmpl $0x0,0x50(%rsp)
2bc4: 74 36 je 2bfc <x86_emulate_memop+0x2b6f>
2bc6: 48 8b 44 24 78 mov 0x78(%rsp),%rax
2bcb: 49 8b 55 00 mov 0x0(%r13),%rdx
2bcf: 48 85 c0 test %rax,%rax
2bd2: 75 11 jne 2be5 <x86_emulate_memop+0x2b58>
2bd4: 48 8b 84 24 50 01 00 mov 0x150(%rsp),%rax
2bdb: 00
2bdc: 48 89 82 00 01 00 00 mov %rax,0x100(%rdx)
2be3: eb c8 jmp 2bad <x86_emulate_memop+0x2b20>
2be5: 48 ff c8 dec %rax
2be8: 48 89 44 24 78 mov %rax,0x78(%rsp)
2bed: 48 8b 82 00 01 00 00 mov 0x100(%rdx),%rax
2bf4: 48 89 84 24 50 01 00 mov %rax,0x150(%rsp)
2bfb: 00
2bfc: 40 80 fd ab cmp $0xab,%bpl
2c00: 77 35 ja 2c37 <x86_emulate_memop+0x2baa>
2c02: 40 80 fd aa cmp $0xaa,%bpl
2c06: 0f 83 e7 01 00 00 jae 2df3 <x86_emulate_memop+0x2d66>
2c0c: 40 80 fd a5 cmp $0xa5,%bpl
2c10: 77 16 ja 2c28 <x86_emulate_memop+0x2b9b>
2c12: 40 80 fd a4 cmp $0xa4,%bpl
2c16: 73 4d jae 2c65 <x86_emulate_memop+0x2bd8>
2c18: 8d 45 a8 lea 0xffffffffffffffa8(%rbp),%eax
2c1b: 3c 07 cmp $0x7,%al
2c1d: 0f 87 6a fe ff ff ja 2a8d <x86_emulate_memop+0x2a00>
2c23: e9 40 03 00 00 jmpq 2f68 <x86_emulate_memop+0x2edb>
2c28: 40 80 fd a7 cmp $0xa7,%bpl
2c2c: 0f 87 5b fe ff ff ja 2a8d <x86_emulate_memop+0x2a00>
2c32: e9 1a 0f 00 00 jmpq 3b51 <x86_emulate_memop+0x3ac4>
2c37: 40 80 fd af cmp $0xaf,%bpl
2c3b: 77 0f ja 2c4c <x86_emulate_memop+0x2bbf>
2c3d: 40 80 fd ae cmp $0xae,%bpl
2c41: 0f 83 0a 0f 00 00 jae 3b51 <x86_emulate_memop+0x3ac4>
2c47: e9 42 02 00 00 jmpq 2e8e <x86_emulate_memop+0x2e01>
2c4c: 40 80 fd c3 cmp $0xc3,%bpl
2c50: 0f 84 08 03 00 00 je 2f5e <x86_emulate_memop+0x2ed1>
2c56: 40 80 fd f4 cmp $0xf4,%bpl
2c5a: 0f 85 2d fe ff ff jne 2a8d <x86_emulate_memop+0x2a00>
2c60: e9 e6 02 00 00 jmpq 2f4b <x86_emulate_memop+0x2ebe>
2c65: f6 44 24 18 01 testb $0x1,0x18(%rsp)
2c6a: b8 01 00 00 00 mov $0x1,%eax
2c6f: c7 84 24 f0 00 00 00 movl $0x1,0xf0(%rsp)
2c76: 01 00 00 00
2c7a: 0f 44 44 24 6c cmove 0x6c(%rsp),%eax
2c7f: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2c84: 49 8b 55 30 mov 0x30(%r13),%rdx
2c88: 89 84 24 f4 00 00 00 mov %eax,0xf4(%rsp)
2c8f: 75 0a jne 2c9b <x86_emulate_memop+0x2c0e>
2c91: 48 8b 84 24 a8 00 00 mov 0xa8(%rsp),%rax
2c98: 00
2c99: eb 1a jmp 2cb5 <x86_emulate_memop+0x2c28>
2c9b: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2c9f: b8 01 00 00 00 mov $0x1,%eax
2ca4: c1 e1 03 shl $0x3,%ecx
2ca7: 48 d3 e0 shl %cl,%rax
2caa: 48 ff c8 dec %rax
2cad: 48 23 84 24 a8 00 00 and 0xa8(%rsp),%rax
2cb4: 00
2cb5: 48 01 d0 add %rdx,%rax
2cb8: 48 83 7c 24 40 00 cmpq $0x0,0x40(%rsp)
2cbe: 8b 94 24 f4 00 00 00 mov 0xf4(%rsp),%edx
2cc5: 48 89 84 24 08 01 00 mov %rax,0x108(%rsp)
2ccc: 00
2ccd: 48 8b 04 24 mov (%rsp),%rax
2cd1: 4c 8b 40 10 mov 0x10(%rax),%r8
2cd5: 74 0a je 2ce1 <x86_emulate_memop+0x2c54>
2cd7: 48 8b 4c 24 40 mov 0x40(%rsp),%rcx
2cdc: 48 8b 39 mov (%rcx),%rdi
2cdf: eb 04 jmp 2ce5 <x86_emulate_memop+0x2c58>
2ce1: 49 8b 7d 28 mov 0x28(%r13),%rdi
2ce5: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2cea: 75 0a jne 2cf6 <x86_emulate_memop+0x2c69>
2cec: 48 8b 84 24 a0 00 00 mov 0xa0(%rsp),%rax
2cf3: 00
2cf4: eb 1a jmp 2d10 <x86_emulate_memop+0x2c83>
2cf6: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2cfa: b8 01 00 00 00 mov $0x1,%eax
2cff: c1 e1 03 shl $0x3,%ecx
2d02: 48 d3 e0 shl %cl,%rax
2d05: 48 ff c8 dec %rax
2d08: 48 23 84 24 a0 00 00 and 0xa0(%rsp),%rax
2d0f: 00
2d10: 48 8d b4 24 f8 00 00 lea 0xf8(%rsp),%rsi
2d17: 00
2d18: 48 8d 3c 38 lea (%rax,%rdi,1),%rdi
2d1c: 4c 89 e9 mov %r13,%rcx
2d1f: 41 ff d0 callq *%r8
2d22: 85 c0 test %eax,%eax
2d24: 41 89 c7 mov %eax,%r15d
2d27: 0f 85 77 fe ff ff jne 2ba4 <x86_emulate_memop+0x2b17>
2d2d: 44 8b 8c 24 f4 00 00 mov 0xf4(%rsp),%r9d
2d34: 00
2d35: 44 89 c8 mov %r9d,%eax
2d38: f7 d8 neg %eax
2d3a: f6 84 24 49 01 00 00 testb $0x4,0x149(%rsp)
2d41: 04
2d42: 44 0f 45 c8 cmovne %eax,%r9d
2d46: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2d4b: 49 63 d1 movslq %r9d,%rdx
2d4e: 75 0a jne 2d5a <x86_emulate_memop+0x2ccd>
2d50: 48 01 94 24 a0 00 00 add %rdx,0xa0(%rsp)
2d57: 00
2d58: eb 32 jmp 2d8c <x86_emulate_memop+0x2cff>
2d5a: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2d5e: 48 8b b4 24 a0 00 00 mov 0xa0(%rsp),%rsi
2d65: 00
2d66: b8 01 00 00 00 mov $0x1,%eax
2d6b: c1 e1 03 shl $0x3,%ecx
2d6e: 48 01 f2 add %rsi,%rdx
2d71: 48 d3 e0 shl %cl,%rax
2d74: 48 8d 48 ff lea 0xffffffffffffffff(%rax),%rcx
2d78: 48 f7 d8 neg %rax
2d7b: 48 21 f0 and %rsi,%rax
2d7e: 48 21 ca and %rcx,%rdx
2d81: 48 09 c2 or %rax,%rdx
2d84: 48 89 94 24 a0 00 00 mov %rdx,0xa0(%rsp)
2d8b: 00
2d8c: 44 8b 8c 24 f4 00 00 mov 0xf4(%rsp),%r9d
2d93: 00
2d94: 44 89 c8 mov %r9d,%eax
2d97: f7 d8 neg %eax
2d99: f6 84 24 49 01 00 00 testb $0x4,0x149(%rsp)
2da0: 04
2da1: 44 0f 45 c8 cmovne %eax,%r9d
2da5: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2daa: 49 63 d1 movslq %r9d,%rdx
2dad: 75 0d jne 2dbc <x86_emulate_memop+0x2d2f>
2daf: 48 01 94 24 a8 00 00 add %rdx,0xa8(%rsp)
2db6: 00
2db7: e9 d1 fc ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
2dbc: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2dc0: 48 8b b4 24 a8 00 00 mov 0xa8(%rsp),%rsi
2dc7: 00
2dc8: b8 01 00 00 00 mov $0x1,%eax
2dcd: c1 e1 03 shl $0x3,%ecx
2dd0: 48 01 f2 add %rsi,%rdx
2dd3: 48 d3 e0 shl %cl,%rax
2dd6: 48 8d 48 ff lea 0xffffffffffffffff(%rax),%rcx
2dda: 48 f7 d8 neg %rax
2ddd: 48 21 f0 and %rsi,%rax
2de0: 48 21 ca and %rcx,%rdx
2de3: 48 09 c2 or %rax,%rdx
2de6: 48 89 94 24 a8 00 00 mov %rdx,0xa8(%rsp)
2ded: 00
2dee: e9 9a fc ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
2df3: f6 44 24 18 01 testb $0x1,0x18(%rsp)
2df8: b8 01 00 00 00 mov $0x1,%eax
2dfd: c7 84 24 f0 00 00 00 movl $0x1,0xf0(%rsp)
2e04: 01 00 00 00
2e08: 0f 44 44 24 6c cmove 0x6c(%rsp),%eax
2e0d: 4c 89 a4 24 08 01 00 mov %r12,0x108(%rsp)
2e14: 00
2e15: 89 44 24 6c mov %eax,0x6c(%rsp)
2e19: 89 84 24 f4 00 00 00 mov %eax,0xf4(%rsp)
2e20: 48 8b 44 24 70 mov 0x70(%rsp),%rax
2e25: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
2e2c: 00
2e2d: 8b 44 24 6c mov 0x6c(%rsp),%eax
2e31: f7 d8 neg %eax
2e33: f6 84 24 49 01 00 00 testb $0x4,0x149(%rsp)
2e3a: 04
2e3b: 0f 44 44 24 6c cmove 0x6c(%rsp),%eax
2e40: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2e45: 48 63 d0 movslq %eax,%rdx
2e48: 75 0d jne 2e57 <x86_emulate_memop+0x2dca>
2e4a: 48 01 94 24 a8 00 00 add %rdx,0xa8(%rsp)
2e51: 00
2e52: e9 36 fc ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
2e57: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2e5b: 48 8b b4 24 a8 00 00 mov 0xa8(%rsp),%rsi
2e62: 00
2e63: b8 01 00 00 00 mov $0x1,%eax
2e68: c1 e1 03 shl $0x3,%ecx
2e6b: 48 01 f2 add %rsi,%rdx
2e6e: 48 d3 e0 shl %cl,%rax
2e71: 48 8d 48 ff lea 0xffffffffffffffff(%rax),%rcx
2e75: 48 f7 d8 neg %rax
2e78: 48 21 f0 and %rsi,%rax
2e7b: 48 21 ca and %rcx,%rdx
2e7e: 48 09 c2 or %rax,%rdx
2e81: 48 89 94 24 a8 00 00 mov %rdx,0xa8(%rsp)
2e88: 00
2e89: e9 ff fb ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
2e8e: f6 44 24 18 01 testb $0x1,0x18(%rsp)
2e93: b8 01 00 00 00 mov $0x1,%eax
2e98: 48 8b 1c 24 mov (%rsp),%rbx
2e9c: 0f 44 44 24 6c cmove 0x6c(%rsp),%eax
2ea1: c7 84 24 f0 00 00 00 movl $0x0,0xf0(%rsp)
2ea8: 00 00 00 00
2eac: 48 8d b4 24 f8 00 00 lea 0xf8(%rsp),%rsi
2eb3: 00
2eb4: 4c 89 e9 mov %r13,%rcx
2eb7: 4c 89 e7 mov %r12,%rdi
2eba: 89 44 24 6c mov %eax,0x6c(%rsp)
2ebe: 89 84 24 f4 00 00 00 mov %eax,0xf4(%rsp)
2ec5: 48 8d 44 24 70 lea 0x70(%rsp),%rax
2eca: 8b 54 24 6c mov 0x6c(%rsp),%edx
2ece: 48 89 84 24 08 01 00 mov %rax,0x108(%rsp)
2ed5: 00
2ed6: ff 53 10 callq *0x10(%rbx)
2ed9: 85 c0 test %eax,%eax
2edb: 41 89 c7 mov %eax,%r15d
2ede: 0f 85 c0 fc ff ff jne 2ba4 <x86_emulate_memop+0x2b17>
2ee4: 44 8b 8c 24 f4 00 00 mov 0xf4(%rsp),%r9d
2eeb: 00
2eec: 44 89 c8 mov %r9d,%eax
2eef: f7 d8 neg %eax
2ef1: f6 84 24 49 01 00 00 testb $0x4,0x149(%rsp)
2ef8: 04
2ef9: 44 0f 45 c8 cmovne %eax,%r9d
2efd: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2f02: 49 63 d1 movslq %r9d,%rdx
2f05: 75 0d jne 2f14 <x86_emulate_memop+0x2e87>
2f07: 48 01 94 24 a0 00 00 add %rdx,0xa0(%rsp)
2f0e: 00
2f0f: e9 79 fb ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
2f14: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2f18: 48 8b b4 24 a0 00 00 mov 0xa0(%rsp),%rsi
2f1f: 00
2f20: b8 01 00 00 00 mov $0x1,%eax
2f25: c1 e1 03 shl $0x3,%ecx
2f28: 48 01 f2 add %rsi,%rdx
2f2b: 48 d3 e0 shl %cl,%rax
2f2e: 48 8d 48 ff lea 0xffffffffffffffff(%rax),%rcx
2f32: 48 f7 d8 neg %rax
2f35: 48 21 f0 and %rsi,%rax
2f38: 48 21 ca and %rcx,%rdx
2f3b: 48 09 c2 or %rax,%rdx
2f3e: 48 89 94 24 a0 00 00 mov %rdx,0xa0(%rsp)
2f45: 00
2f46: e9 42 fb ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
2f4b: 49 8b 45 00 mov 0x0(%r13),%rax
2f4f: c7 80 28 0a 00 00 01 movl $0x1,0xa28(%rax)
2f56: 00 00 00
2f59: e9 4f fc ff ff jmpq 2bad <x86_emulate_memop+0x2b20>
2f5e: 48 8d 84 24 50 01 00 lea 0x150(%rsp),%rax
2f65: 00
2f66: eb 0b jmp 2f73 <x86_emulate_memop+0x2ee6>
2f68: 48 89 e8 mov %rbp,%rax
2f6b: 83 e0 07 and $0x7,%eax
2f6e: 48 8d 44 c4 70 lea 0x70(%rsp,%rax,8),%rax
2f73: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2f78: 48 8b 2c 24 mov (%rsp),%rbp
2f7c: 48 89 84 24 08 01 00 mov %rax,0x108(%rsp)
2f83: 00
2f84: 49 8b 55 38 mov 0x38(%r13),%rdx
2f88: 48 8b b4 24 08 01 00 mov 0x108(%rsp),%rsi
2f8f: 00
2f90: 4c 8b 45 00 mov 0x0(%rbp),%r8
2f94: 75 0a jne 2fa0 <x86_emulate_memop+0x2f13>
2f96: 48 8b 84 24 90 00 00 mov 0x90(%rsp),%rax
2f9d: 00
2f9e: eb 1a jmp 2fba <x86_emulate_memop+0x2f2d>
2fa0: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2fa4: b8 01 00 00 00 mov $0x1,%eax
2fa9: c1 e1 03 shl $0x3,%ecx
2fac: 48 d3 e0 shl %cl,%rax
2faf: 48 ff c8 dec %rax
2fb2: 48 23 84 24 90 00 00 and 0x90(%rsp),%rax
2fb9: 00
2fba: 48 8d 3c 10 lea (%rax,%rdx,1),%rdi
2fbe: 4c 89 e9 mov %r13,%rcx
2fc1: 8b 54 24 6c mov 0x6c(%rsp),%edx
2fc5: 41 ff d0 callq *%r8
2fc8: 85 c0 test %eax,%eax
2fca: 41 89 c7 mov %eax,%r15d
2fcd: 0f 85 d1 fb ff ff jne 2ba4 <x86_emulate_memop+0x2b17>
2fd3: 83 7c 24 48 08 cmpl $0x8,0x48(%rsp)
2fd8: 48 63 54 24 6c movslq 0x6c(%rsp),%rdx
2fdd: 75 0d jne 2fec <x86_emulate_memop+0x2f5f>
2fdf: 48 01 94 24 90 00 00 add %rdx,0x90(%rsp)
2fe6: 00
2fe7: e9 82 fb ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
2fec: 8b 4c 24 48 mov 0x48(%rsp),%ecx
2ff0: 48 8b b4 24 90 00 00 mov 0x90(%rsp),%rsi
2ff7: 00
2ff8: b8 01 00 00 00 mov $0x1,%eax
2ffd: c1 e1 03 shl $0x3,%ecx
3000: 48 01 f2 add %rsi,%rdx
3003: 48 d3 e0 shl %cl,%rax
3006: 48 8d 48 ff lea 0xffffffffffffffff(%rax),%rcx
300a: 48 f7 d8 neg %rax
300d: 48 21 f0 and %rsi,%rax
3010: 48 21 ca and %rcx,%rdx
3013: 48 09 c2 or %rax,%rdx
3016: 48 89 94 24 90 00 00 mov %rdx,0x90(%rsp)
301d: 00
301e: e9 4b fb ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
3023: 40 80 fd b1 cmp $0xb1,%bpl
3027: 77 52 ja 307b <x86_emulate_memop+0x2fee>
3029: 40 80 fd b0 cmp $0xb0,%bpl
302d: 0f 83 e5 02 00 00 jae 3318 <x86_emulate_memop+0x328b>
3033: 40 80 fd 4f cmp $0x4f,%bpl
3037: 77 29 ja 3062 <x86_emulate_memop+0x2fd5>
3039: 40 80 fd 40 cmp $0x40,%bpl
303d: 0f 83 ea 01 00 00 jae 322d <x86_emulate_memop+0x31a0>
3043: 40 80 fd 21 cmp $0x21,%bpl
3047: 0f 84 8c 01 00 00 je 31d9 <x86_emulate_memop+0x314c>
304d: 40 80 fd 23 cmp $0x23,%bpl
3051: 0f 84 ac 01 00 00 je 3203 <x86_emulate_memop+0x3176>
3057: 40 fe cd dec %bpl
305a: 0f 85 2d fa ff ff jne 2a8d <x86_emulate_memop+0x2a00>
3060: eb 5f jmp 30c1 <x86_emulate_memop+0x3034>
3062: 40 80 fd a3 cmp $0xa3,%bpl
3066: 0f 84 37 04 00 00 je 34a3 <x86_emulate_memop+0x3416>
306c: 40 80 fd ab cmp $0xab,%bpl
3070: 0f 85 17 fa ff ff jne 2a8d <x86_emulate_memop+0x2a00>
3076: e9 3a 06 00 00 jmpq 36b5 <x86_emulate_memop+0x3628>
307b: 40 80 fd ba cmp $0xba,%bpl
307f: 0f 84 7d 08 00 00 je 3902 <x86_emulate_memop+0x3875>
3085: 77 20 ja 30a7 <x86_emulate_memop+0x301a>
3087: 40 80 fd b3 cmp $0xb3,%bpl
308b: 0f 84 21 05 00 00 je 35b2 <x86_emulate_memop+0x3525>
3091: 0f 82 f6 f9 ff ff jb 2a8d <x86_emulate_memop+0x2a00>
3097: 8d 45 4a lea 0x4a(%rbp),%eax
309a: 3c 01 cmp $0x1,%al
309c: 0f 87 eb f9 ff ff ja 2a8d <x86_emulate_memop+0x2a00>
30a2: e9 1d 07 00 00 jmpq 37c4 <x86_emulate_memop+0x3737>
30a7: 40 80 fd bb cmp $0xbb,%bpl
30ab: 0f 84 42 07 00 00 je 37f3 <x86_emulate_memop+0x3766>
30b1: 8d 45 42 lea 0x42(%rbp),%eax
30b4: 3c 01 cmp $0x1,%al
30b6: 0f 87 d1 f9 ff ff ja 2a8d <x86_emulate_memop+0x2a00>
30bc: e9 77 08 00 00 jmpq 3938 <x86_emulate_memop+0x38ab>
30c1: 8a 44 24 20 mov 0x20(%rsp),%al
30c5: 83 e8 02 sub $0x2,%eax
30c8: 3c 05 cmp $0x5,%al
30ca: 0f 87 81 0a 00 00 ja 3b51 <x86_emulate_memop+0x3ac4>
30d0: 0f b6 c0 movzbl %al,%eax
30d3: ff 24 c5 00 00 00 00 jmpq *0x0(,%rax,8)
30d6: R_X86_64_32S .rodata+0xc0
30da: 48 8b 94 24 28 01 00 mov 0x128(%rsp),%rdx
30e1: 00
30e2: 44 8b 4c 24 6c mov 0x6c(%rsp),%r9d
30e7: 48 8d 8c 24 38 01 00 lea 0x138(%rsp),%rcx
30ee: 00
30ef: 48 8b 34 24 mov (%rsp),%rsi
30f3: 4c 8d 84 24 40 01 00 lea 0x140(%rsp),%r8
30fa: 00
30fb: 4c 89 ef mov %r13,%rdi
30fe: e8 28 cf ff ff callq 2b <read_descriptor>
3103: 85 c0 test %eax,%eax
3105: 41 89 c7 mov %eax,%r15d
3108: 0f 85 96 fa ff ff jne 2ba4 <x86_emulate_memop+0x2b17>
310e: 0f b7 b4 24 38 01 00 movzwl 0x138(%rsp),%esi
3115: 00
3116: 48 8b 94 24 40 01 00 mov 0x140(%rsp),%rdx
311d: 00
311e: 49 8b 7d 00 mov 0x0(%r13),%rdi
3122: e8 00 00 00 00 callq 3127 <x86_emulate_memop+0x309a>
3123: R_X86_64_PC32 realmode_lgdt+0xfffffffffffffffc
3127: e9 61 f9 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
312c: 48 8b 94 24 28 01 00 mov 0x128(%rsp),%rdx
3133: 00
3134: 44 8b 4c 24 6c mov 0x6c(%rsp),%r9d
3139: 48 8d 8c 24 38 01 00 lea 0x138(%rsp),%rcx
3140: 00
3141: 48 8b 34 24 mov (%rsp),%rsi
3145: 4c 8d 84 24 40 01 00 lea 0x140(%rsp),%r8
314c: 00
314d: 4c 89 ef mov %r13,%rdi
3150: e8 d6 ce ff ff callq 2b <read_descriptor>
3155: 85 c0 test %eax,%eax
3157: 41 89 c7 mov %eax,%r15d
315a: 0f 85 44 fa ff ff jne 2ba4 <x86_emulate_memop+0x2b17>
3160: 0f b7 b4 24 38 01 00 movzwl 0x138(%rsp),%esi
3167: 00
3168: 48 8b 94 24 40 01 00 mov 0x140(%rsp),%rdx
316f: 00
3170: 49 8b 7d 00 mov 0x0(%r13),%rdi
3174: e8 00 00 00 00 callq 3179 <x86_emulate_memop+0x30ec>
3175: R_X86_64_PC32 realmode_lidt+0xfffffffffffffffc
3179: e9 0f f9 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
317e: 80 7c 24 1f 03 cmpb $0x3,0x1f(%rsp)
3183: 0f 85 c8 09 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
3189: 49 8b 7d 00 mov 0x0(%r13),%rdi
318d: 31 f6 xor %esi,%esi
318f: e8 00 00 00 00 callq 3194 <x86_emulate_memop+0x3107>
3190: R_X86_64_PC32 realmode_get_cr+0xfffffffffffffffc
3194: 0f b6 54 24 3f movzbl 0x3f(%rsp),%edx
3199: 66 89 44 d4 70 mov %ax,0x70(%rsp,%rdx,8)
319e: e9 ea f8 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
31a3: 80 7c 24 1f 03 cmpb $0x3,0x1f(%rsp)
31a8: 0f 85 a3 09 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
31ae: 49 8b 7d 00 mov 0x0(%r13),%rdi
31b2: 48 8d 94 24 48 01 00 lea 0x148(%rsp),%rdx
31b9: 00
31ba: 41 0f b7 f6 movzwl %r14w,%esi
31be: e8 00 00 00 00 callq 31c3 <x86_emulate_memop+0x3136>
31bf: R_X86_64_PC32 realmode_lmsw+0xfffffffffffffffc
31c3: e9 c5 f8 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
31c8: 49 8b 7d 00 mov 0x0(%r13),%rdi
31cc: 4c 89 e6 mov %r12,%rsi
31cf: e8 00 00 00 00 callq 31d4 <x86_emulate_memop+0x3147>
31d0: R_X86_64_PC32 emulate_invlpg+0xfffffffffffffffc
31d4: e9 b4 f8 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
31d9: 80 7c 24 1f 03 cmpb $0x3,0x1f(%rsp)
31de: 0f 85 6d 09 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
31e4: 0f b6 54 24 3f movzbl 0x3f(%rsp),%edx
31e9: 0f b6 74 24 20 movzbl 0x20(%rsp),%esi
31ee: 4c 89 ef mov %r13,%rdi
31f1: 48 8d 54 d4 70 lea 0x70(%rsp,%rdx,8),%rdx
31f6: e8 00 00 00 00 callq 31fb <x86_emulate_memop+0x316e>
31f7: R_X86_64_PC32 emulator_get_dr+0xfffffffffffffffc
31fb: 41 89 c7 mov %eax,%r15d
31fe: e9 8a f8 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
3203: 80 7c 24 1f 03 cmpb $0x3,0x1f(%rsp)
3208: 0f 85 43 09 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
320e: 0f b6 44 24 3f movzbl 0x3f(%rsp),%eax
3213: 0f b6 74 24 20 movzbl 0x20(%rsp),%esi
3218: 4c 89 ef mov %r13,%rdi
321b: 48 8b 54 c4 70 mov 0x70(%rsp,%rax,8),%rdx
3220: e8 00 00 00 00 callq 3225 <x86_emulate_memop+0x3198>
3221: R_X86_64_PC32 emulator_set_dr+0xfffffffffffffffc
3225: 41 89 c7 mov %eax,%r15d
3228: e9 60 f8 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
322d: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
3234: 00
3235: 40 0f b6 cd movzbl %bpl,%ecx
3239: ba 01 00 00 00 mov $0x1,%edx
323e: 48 89 84 24 00 01 00 mov %rax,0x100(%rsp)
3245: 00
3246: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
324d: 00
324e: 89 c8 mov %ecx,%eax
3250: 83 e0 0f and $0xf,%eax
3253: d1 f8 sar %eax
3255: 83 f8 07 cmp $0x7,%eax
3258: 0f 87 24 f8 ff ff ja 2a82 <x86_emulate_memop+0x29f5>
325e: 89 c0 mov %eax,%eax
3260: ff 24 c5 00 00 00 00 jmpq *0x0(,%rax,8)
3263: R_X86_64_32S .rodata+0xf0
3267: be 01 00 00 00 mov $0x1,%esi
326c: e9 84 00 00 00 jmpq 32f5 <x86_emulate_memop+0x3268>
3271: 48 8b 84 24 48 01 00 mov 0x148(%rsp),%rax
3278: 00
3279: 80 f4 08 xor $0x8,%ah
327c: 48 c1 e8 0b shr $0xb,%rax
3280: eb 54 jmp 32d6 <x86_emulate_memop+0x3249>
3282: 8a 84 24 48 01 00 00 mov 0x148(%rsp),%al
3289: 83 e0 01 and $0x1,%eax
328c: 83 f0 01 xor $0x1,%eax
328f: eb 1d jmp 32ae <x86_emulate_memop+0x3221>
3291: 48 8b 84 24 48 01 00 mov 0x148(%rsp),%rax
3298: 00
3299: 48 83 f0 40 xor $0x40,%rax
329d: 48 c1 e8 06 shr $0x6,%rax
32a1: eb 33 jmp 32d6 <x86_emulate_memop+0x3249>
32a3: f6 84 24 48 01 00 00 testb $0x41,0x148(%rsp)
32aa: 41
32ab: 0f 94 c0 sete %al
32ae: 0f b6 d0 movzbl %al,%edx
32b1: e9 cc f7 ff ff jmpq 2a82 <x86_emulate_memop+0x29f5>
32b6: 48 8b 84 24 48 01 00 mov 0x148(%rsp),%rax
32bd: 00
32be: 34 80 xor $0x80,%al
32c0: 48 c1 e8 07 shr $0x7,%rax
32c4: eb 10 jmp 32d6 <x86_emulate_memop+0x3249>
32c6: 48 8b 84 24 48 01 00 mov 0x148(%rsp),%rax
32cd: 00
32ce: 48 83 f0 04 xor $0x4,%rax
32d2: 48 c1 e8 02 shr $0x2,%rax
32d6: 89 c2 mov %eax,%edx
32d8: 83 e2 01 and $0x1,%edx
32db: e9 a2 f7 ff ff jmpq 2a82 <x86_emulate_memop+0x29f5>
32e0: 48 8b 84 24 48 01 00 mov 0x148(%rsp),%rax
32e7: 00
32e8: 48 83 f0 40 xor $0x40,%rax
32ec: 48 c1 e8 06 shr $0x6,%rax
32f0: 89 c6 mov %eax,%esi
32f2: 83 e6 01 and $0x1,%esi
32f5: 48 8b 84 24 48 01 00 mov 0x148(%rsp),%rax
32fc: 00
32fd: 48 89 c2 mov %rax,%rdx
3300: 48 c1 e8 0b shr $0xb,%rax
3304: 48 c1 ea 07 shr $0x7,%rdx
3308: 48 83 f0 01 xor $0x1,%rax
330c: 48 31 d0 xor %rdx,%rax
330f: 89 f2 mov %esi,%edx
3311: 21 c2 and %eax,%edx
3313: e9 6a f7 ff ff jmpq 2a82 <x86_emulate_memop+0x29f5>
break;
case 0xb0 ... 0xb1: /* cmpxchg */
/*
* Save real source value, then compare EAX against
* destination.
*/
src.orig_val = src.val;
3318: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
331f: 00
src.val = _regs[VCPU_REGS_RAX];
emulate_2op_SrcV("cmp", src, dst, _eflags);
3320: 8b 94 24 f4 00 00 00 mov 0xf4(%rsp),%edx
3327: 48 89 84 24 20 01 00 mov %rax,0x120(%rsp)
332e: 00
332f: 48 8b 44 24 70 mov 0x70(%rsp),%rax
3334: 83 fa 01 cmp $0x1,%edx
3337: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
333e: 00
333f: 75 4f jne 3390 <x86_emulate_memop+0x3303>
3341: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
3348: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
334e: 44 21 04 24 and %r8d,(%rsp)
3352: 9c pushfq
3353: 41 f7 d0 not %r8d
3356: 44 21 04 24 and %r8d,(%rsp)
335a: 41 58 pop %r8
335c: 44 09 04 24 or %r8d,(%rsp)
3360: 9d popfq
3361: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
3367: 41 f7 d0 not %r8d
336a: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
3371: 00
3372: 38 84 24 f8 00 00 00 cmp %al,0xf8(%rsp)
3379: 9c pushfq
337a: 41 58 pop %r8
337c: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
3383: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
338a: 00
338b: e9 d7 00 00 00 jmpq 3467 <x86_emulate_memop+0x33da>
3390: 83 fa 04 cmp $0x4,%edx
3393: 74 53 je 33e8 <x86_emulate_memop+0x335b>
3395: 83 fa 08 cmp $0x8,%edx
3398: 0f 84 8a 00 00 00 je 3428 <x86_emulate_memop+0x339b>
339e: 83 fa 02 cmp $0x2,%edx
33a1: 0f 85 c0 00 00 00 jne 3467 <x86_emulate_memop+0x33da>
33a7: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
33ae: ba d5 08 00 00 mov $0x8d5,%edx
33b3: 21 14 24 and %edx,(%rsp)
33b6: 9c pushfq
33b7: f7 d2 not %edx
33b9: 21 14 24 and %edx,(%rsp)
33bc: 5a pop %rdx
33bd: 09 14 24 or %edx,(%rsp)
33c0: 9d popfq
33c1: ba d5 08 00 00 mov $0x8d5,%edx
33c6: f7 d2 not %edx
33c8: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
33cf: 66 39 84 24 f8 00 00 cmp %ax,0xf8(%rsp)
33d6: 00
33d7: 9c pushfq
33d8: 5a pop %rdx
33d9: 81 e2 d5 08 00 00 and $0x8d5,%edx
33df: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
33e6: eb 7f jmp 3467 <x86_emulate_memop+0x33da>
33e8: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
33ef: b9 d5 08 00 00 mov $0x8d5,%ecx
33f4: 21 0c 24 and %ecx,(%rsp)
33f7: 9c pushfq
33f8: f7 d1 not %ecx
33fa: 21 0c 24 and %ecx,(%rsp)
33fd: 59 pop %rcx
33fe: 09 0c 24 or %ecx,(%rsp)
3401: 9d popfq
3402: b9 d5 08 00 00 mov $0x8d5,%ecx
3407: f7 d1 not %ecx
3409: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
3410: 39 84 24 f8 00 00 00 cmp %eax,0xf8(%rsp)
3417: 9c pushfq
3418: 59 pop %rcx
3419: 81 e1 d5 08 00 00 and $0x8d5,%ecx
341f: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
3426: eb 3f jmp 3467 <x86_emulate_memop+0x33da>
3428: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
342f: bb d5 08 00 00 mov $0x8d5,%ebx
3434: 21 1c 24 and %ebx,(%rsp)
3437: 9c pushfq
3438: f7 d3 not %ebx
343a: 21 1c 24 and %ebx,(%rsp)
343d: 5b pop %rbx
343e: 09 1c 24 or %ebx,(%rsp)
3441: 9d popfq
3442: bb d5 08 00 00 mov $0x8d5,%ebx
3447: f7 d3 not %ebx
3449: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
3450: 48 39 84 24 f8 00 00 cmp %rax,0xf8(%rsp)
3457: 00
3458: 9c pushfq
3459: 5b pop %rbx
345a: 81 e3 d5 08 00 00 and $0x8d5,%ebx
3460: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
/* Always write back. The question is: where to? */
d |= Mov;
if (_eflags & EFLG_ZF) {
3467: f6 84 24 48 01 00 00 testb $0x40,0x148(%rsp)
346e: 40
346f: 74 15 je 3486 <x86_emulate_memop+0x33f9>
/* Success: write back to memory. */
dst.val = src.orig_val;
3471: 48 8b 84 24 20 01 00 mov 0x120(%rsp),%rax
3478: 00
3479: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
3480: 00
3481: e9 07 f6 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
} else {
/* Failure: write the value we saw to EAX. */
dst.type = OP_REG;
dst.ptr = (unsigned long *)&_regs[VCPU_REGS_RAX];
3486: 48 8d 44 24 70 lea 0x70(%rsp),%rax
348b: c7 84 24 f0 00 00 00 movl $0x0,0xf0(%rsp)
3492: 00 00 00 00
3496: 48 89 84 24 08 01 00 mov %rax,0x108(%rsp)
349d: 00
349e: e9 ea f5 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
}
break;
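The interleaved source above shows four size-specific arms (1/2/4/8 bytes) of the same cmpxchg emulation, each saving and restoring the arithmetic flags (the 0x8d5 mask is OF|SF|ZF|AF|PF|CF) around a native `cmp`. The size-independent semantics can be sketched in plain C; `emulate_cmpxchg64` below is a hypothetical stand-alone helper for illustration, not a kvm function:

```c
#include <stdint.h>

/* Hypothetical sketch of the cmpxchg write-back decision above:
 * compare the accumulator against the destination; on match (ZF set)
 * the source value is written back to memory, otherwise the observed
 * destination value is written to EAX. Returns 1 on success. */
static int emulate_cmpxchg64(uint64_t *eax, uint64_t *dst, uint64_t src)
{
        if (*eax == *dst) {
                *dst = src;     /* success: write back to memory */
                return 1;
        }
        *eax = *dst;            /* failure: write the value we saw to EAX */
        return 0;
}
```

Note that, as the `/* Always write back. */` comment says, the emulator writes a destination in both branches; only the target (memory vs. the RAX register slot) differs.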
case 0xa3:
bt: /* bt */
src.val &= (dst.bytes << 3) - 1; /* only subword offset */
34a3: 8b 94 24 f4 00 00 00 mov 0xf4(%rsp),%edx
34aa: 8d 04 d5 ff ff ff ff lea 0xffffffffffffffff(,%rdx,8),%eax
34b1: 48 23 84 24 18 01 00 and 0x118(%rsp),%rax
34b8: 00
emulate_2op_SrcV_nobyte("bt", src, dst, _eflags);
34b9: 83 fa 04 cmp $0x4,%edx
34bc: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
34c3: 00
34c4: 74 57 je 351d <x86_emulate_memop+0x3490>
34c6: 83 fa 08 cmp $0x8,%edx
34c9: 0f 84 9e 00 00 00 je 356d <x86_emulate_memop+0x34e0>
34cf: 83 fa 02 cmp $0x2,%edx
34d2: 0f 85 b5 f5 ff ff jne 2a8d <x86_emulate_memop+0x2a00>
34d8: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
34df: bd d5 08 00 00 mov $0x8d5,%ebp
34e4: 21 2c 24 and %ebp,(%rsp)
34e7: 9c pushfq
34e8: f7 d5 not %ebp
34ea: 21 2c 24 and %ebp,(%rsp)
34ed: 5d pop %rbp
34ee: 09 2c 24 or %ebp,(%rsp)
34f1: 9d popfq
34f2: bd d5 08 00 00 mov $0x8d5,%ebp
34f7: f7 d5 not %ebp
34f9: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
3500: 66 0f a3 84 24 f8 00 bt %ax,0xf8(%rsp)
3507: 00 00
3509: 9c pushfq
350a: 5d pop %rbp
350b: 81 e5 d5 08 00 00 and $0x8d5,%ebp
3511: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
3518: e9 70 f5 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
351d: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
3524: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
352a: 44 21 04 24 and %r8d,(%rsp)
352e: 9c pushfq
352f: 41 f7 d0 not %r8d
3532: 44 21 04 24 and %r8d,(%rsp)
3536: 41 58 pop %r8
3538: 44 09 04 24 or %r8d,(%rsp)
353c: 9d popfq
353d: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
3543: 41 f7 d0 not %r8d
3546: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
354d: 00
354e: 0f a3 84 24 f8 00 00 bt %eax,0xf8(%rsp)
3555: 00
3556: 9c pushfq
3557: 41 58 pop %r8
3559: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
3560: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
3567: 00
3568: e9 20 f5 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
356d: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
3574: ba d5 08 00 00 mov $0x8d5,%edx
3579: 21 14 24 and %edx,(%rsp)
357c: 9c pushfq
357d: f7 d2 not %edx
357f: 21 14 24 and %edx,(%rsp)
3582: 5a pop %rdx
3583: 09 14 24 or %edx,(%rsp)
3586: 9d popfq
3587: ba d5 08 00 00 mov $0x8d5,%edx
358c: f7 d2 not %edx
358e: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
3595: 48 0f a3 84 24 f8 00 bt %rax,0xf8(%rsp)
359c: 00 00
359e: 9c pushfq
359f: 5a pop %rdx
35a0: 81 e2 d5 08 00 00 and $0x8d5,%edx
35a6: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
35ad: e9 db f4 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0xb3:
btr: /* btr */
src.val &= (dst.bytes << 3) - 1; /* only subword offset */
35b2: 8b 94 24 f4 00 00 00 mov 0xf4(%rsp),%edx
35b9: 8d 04 d5 ff ff ff ff lea 0xffffffffffffffff(,%rdx,8),%eax
35c0: 48 23 84 24 18 01 00 and 0x118(%rsp),%rax
35c7: 00
emulate_2op_SrcV_nobyte("btr", src, dst, _eflags);
35c8: 83 fa 04 cmp $0x4,%edx
35cb: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
35d2: 00
35d3: 74 57 je 362c <x86_emulate_memop+0x359f>
35d5: 83 fa 08 cmp $0x8,%edx
35d8: 0f 84 92 00 00 00 je 3670 <x86_emulate_memop+0x35e3>
35de: 83 fa 02 cmp $0x2,%edx
35e1: 0f 85 a6 f4 ff ff jne 2a8d <x86_emulate_memop+0x2a00>
35e7: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
35ee: b9 d5 08 00 00 mov $0x8d5,%ecx
35f3: 21 0c 24 and %ecx,(%rsp)
35f6: 9c pushfq
35f7: f7 d1 not %ecx
35f9: 21 0c 24 and %ecx,(%rsp)
35fc: 59 pop %rcx
35fd: 09 0c 24 or %ecx,(%rsp)
3600: 9d popfq
3601: b9 d5 08 00 00 mov $0x8d5,%ecx
3606: f7 d1 not %ecx
3608: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
360f: 66 0f b3 84 24 f8 00 btr %ax,0xf8(%rsp)
3616: 00 00
3618: 9c pushfq
3619: 59 pop %rcx
361a: 81 e1 d5 08 00 00 and $0x8d5,%ecx
3620: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
3627: e9 61 f4 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
362c: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
3633: bb d5 08 00 00 mov $0x8d5,%ebx
3638: 21 1c 24 and %ebx,(%rsp)
363b: 9c pushfq
363c: f7 d3 not %ebx
363e: 21 1c 24 and %ebx,(%rsp)
3641: 5b pop %rbx
3642: 09 1c 24 or %ebx,(%rsp)
3645: 9d popfq
3646: bb d5 08 00 00 mov $0x8d5,%ebx
364b: f7 d3 not %ebx
364d: 21 9c 24 48 01 00 00 and %ebx,0x148(%rsp)
3654: 0f b3 84 24 f8 00 00 btr %eax,0xf8(%rsp)
365b: 00
365c: 9c pushfq
365d: 5b pop %rbx
365e: 81 e3 d5 08 00 00 and $0x8d5,%ebx
3664: 09 9c 24 48 01 00 00 or %ebx,0x148(%rsp)
366b: e9 1d f4 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
3670: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
3677: bd d5 08 00 00 mov $0x8d5,%ebp
367c: 21 2c 24 and %ebp,(%rsp)
367f: 9c pushfq
3680: f7 d5 not %ebp
3682: 21 2c 24 and %ebp,(%rsp)
3685: 5d pop %rbp
3686: 09 2c 24 or %ebp,(%rsp)
3689: 9d popfq
368a: bd d5 08 00 00 mov $0x8d5,%ebp
368f: f7 d5 not %ebp
3691: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
3698: 48 0f b3 84 24 f8 00 btr %rax,0xf8(%rsp)
369f: 00 00
36a1: 9c pushfq
36a2: 5d pop %rbp
36a3: 81 e5 d5 08 00 00 and $0x8d5,%ebp
36a9: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
36b0: e9 d8 f3 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0xab:
bts: /* bts */
src.val &= (dst.bytes << 3) - 1; /* only subword offset */
36b5: 8b 94 24 f4 00 00 00 mov 0xf4(%rsp),%edx
36bc: 8d 04 d5 ff ff ff ff lea 0xffffffffffffffff(,%rdx,8),%eax
36c3: 48 23 84 24 18 01 00 and 0x118(%rsp),%rax
36ca: 00
emulate_2op_SrcV_nobyte("bts", src, dst, _eflags);
36cb: 83 fa 04 cmp $0x4,%edx
36ce: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
36d5: 00
36d6: 74 63 je 373b <x86_emulate_memop+0x36ae>
36d8: 83 fa 08 cmp $0x8,%edx
36db: 0f 84 9e 00 00 00 je 377f <x86_emulate_memop+0x36f2>
36e1: 83 fa 02 cmp $0x2,%edx
36e4: 0f 85 a3 f3 ff ff jne 2a8d <x86_emulate_memop+0x2a00>
36ea: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
36f1: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
36f7: 44 21 04 24 and %r8d,(%rsp)
36fb: 9c pushfq
36fc: 41 f7 d0 not %r8d
36ff: 44 21 04 24 and %r8d,(%rsp)
3703: 41 58 pop %r8
3705: 44 09 04 24 or %r8d,(%rsp)
3709: 9d popfq
370a: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
3710: 41 f7 d0 not %r8d
3713: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
371a: 00
371b: 66 0f ab 84 24 f8 00 bts %ax,0xf8(%rsp)
3722: 00 00
3724: 9c pushfq
3725: 41 58 pop %r8
3727: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
372e: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
3735: 00
3736: e9 52 f3 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
373b: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
3742: ba d5 08 00 00 mov $0x8d5,%edx
3747: 21 14 24 and %edx,(%rsp)
374a: 9c pushfq
374b: f7 d2 not %edx
374d: 21 14 24 and %edx,(%rsp)
3750: 5a pop %rdx
3751: 09 14 24 or %edx,(%rsp)
3754: 9d popfq
3755: ba d5 08 00 00 mov $0x8d5,%edx
375a: f7 d2 not %edx
375c: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
3763: 0f ab 84 24 f8 00 00 bts %eax,0xf8(%rsp)
376a: 00
376b: 9c pushfq
376c: 5a pop %rdx
376d: 81 e2 d5 08 00 00 and $0x8d5,%edx
3773: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
377a: e9 0e f3 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
377f: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
3786: b9 d5 08 00 00 mov $0x8d5,%ecx
378b: 21 0c 24 and %ecx,(%rsp)
378e: 9c pushfq
378f: f7 d1 not %ecx
3791: 21 0c 24 and %ecx,(%rsp)
3794: 59 pop %rcx
3795: 09 0c 24 or %ecx,(%rsp)
3798: 9d popfq
3799: b9 d5 08 00 00 mov $0x8d5,%ecx
379e: f7 d1 not %ecx
37a0: 21 8c 24 48 01 00 00 and %ecx,0x148(%rsp)
37a7: 48 0f ab 84 24 f8 00 bts %rax,0xf8(%rsp)
37ae: 00 00
37b0: 9c pushfq
37b1: 59 pop %rcx
37b2: 81 e1 d5 08 00 00 and $0x8d5,%ecx
37b8: 09 8c 24 48 01 00 00 or %ecx,0x148(%rsp)
37bf: e9 c9 f2 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0xb6 ... 0xb7: /* movzx */
dst.bytes = op_bytes;
dst.val = (d & ByteOp) ? (u8) src.val : (u16) src.val;
37c4: f6 44 24 18 01 testb $0x1,0x18(%rsp)
37c9: 8b 5c 24 6c mov 0x6c(%rsp),%ebx
37cd: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
37d4: 00
37d5: 89 9c 24 f4 00 00 00 mov %ebx,0xf4(%rsp)
37dc: 74 05 je 37e3 <x86_emulate_memop+0x3756>
37de: 0f b6 c0 movzbl %al,%eax
37e1: eb 03 jmp 37e6 <x86_emulate_memop+0x3759>
37e3: 0f b7 c0 movzwl %ax,%eax
37e6: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
37ed: 00
37ee: e9 9a f2 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
case 0xbb:
btc: /* btc */
src.val &= (dst.bytes << 3) - 1; /* only subword offset */
37f3: 8b 94 24 f4 00 00 00 mov 0xf4(%rsp),%edx
37fa: 8d 04 d5 ff ff ff ff lea 0xffffffffffffffff(,%rdx,8),%eax
3801: 48 23 84 24 18 01 00 and 0x118(%rsp),%rax
3808: 00
emulate_2op_SrcV_nobyte("btc", src, dst, _eflags);
3809: 83 fa 04 cmp $0x4,%edx
380c: 48 89 84 24 18 01 00 mov %rax,0x118(%rsp)
3813: 00
3814: 74 57 je 386d <x86_emulate_memop+0x37e0>
3816: 83 fa 08 cmp $0x8,%edx
3819: 0f 84 9e 00 00 00 je 38bd <x86_emulate_memop+0x3830>
381f: 83 fa 02 cmp $0x2,%edx
3822: 0f 85 65 f2 ff ff jne 2a8d <x86_emulate_memop+0x2a00>
3828: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
382f: bd d5 08 00 00 mov $0x8d5,%ebp
3834: 21 2c 24 and %ebp,(%rsp)
3837: 9c pushfq
3838: f7 d5 not %ebp
383a: 21 2c 24 and %ebp,(%rsp)
383d: 5d pop %rbp
383e: 09 2c 24 or %ebp,(%rsp)
3841: 9d popfq
3842: bd d5 08 00 00 mov $0x8d5,%ebp
3847: f7 d5 not %ebp
3849: 21 ac 24 48 01 00 00 and %ebp,0x148(%rsp)
3850: 66 0f bb 84 24 f8 00 btc %ax,0xf8(%rsp)
3857: 00 00
3859: 9c pushfq
385a: 5d pop %rbp
385b: 81 e5 d5 08 00 00 and $0x8d5,%ebp
3861: 09 ac 24 48 01 00 00 or %ebp,0x148(%rsp)
3868: e9 20 f2 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
386d: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
3874: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
387a: 44 21 04 24 and %r8d,(%rsp)
387e: 9c pushfq
387f: 41 f7 d0 not %r8d
3882: 44 21 04 24 and %r8d,(%rsp)
3886: 41 58 pop %r8
3888: 44 09 04 24 or %r8d,(%rsp)
388c: 9d popfq
388d: 41 b8 d5 08 00 00 mov $0x8d5,%r8d
3893: 41 f7 d0 not %r8d
3896: 44 21 84 24 48 01 00 and %r8d,0x148(%rsp)
389d: 00
389e: 0f bb 84 24 f8 00 00 btc %eax,0xf8(%rsp)
38a5: 00
38a6: 9c pushfq
38a7: 41 58 pop %r8
38a9: 41 81 e0 d5 08 00 00 and $0x8d5,%r8d
38b0: 44 09 84 24 48 01 00 or %r8d,0x148(%rsp)
38b7: 00
38b8: e9 d0 f1 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
38bd: ff b4 24 48 01 00 00 pushq 0x148(%rsp)
38c4: ba d5 08 00 00 mov $0x8d5,%edx
38c9: 21 14 24 and %edx,(%rsp)
38cc: 9c pushfq
38cd: f7 d2 not %edx
38cf: 21 14 24 and %edx,(%rsp)
38d2: 5a pop %rdx
38d3: 09 14 24 or %edx,(%rsp)
38d6: 9d popfq
38d7: ba d5 08 00 00 mov $0x8d5,%edx
38dc: f7 d2 not %edx
38de: 21 94 24 48 01 00 00 and %edx,0x148(%rsp)
38e5: 48 0f bb 84 24 f8 00 btc %rax,0xf8(%rsp)
38ec: 00 00
38ee: 9c pushfq
38ef: 5a pop %rdx
38f0: 81 e2 d5 08 00 00 and $0x8d5,%edx
38f6: 09 94 24 48 01 00 00 or %edx,0x148(%rsp)
38fd: e9 8b f1 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
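The bt/bts/btr/btc cases above all begin with the same `src.val &= (dst.bytes << 3) - 1` masking, reducing the bit index to an offset inside the destination operand, and all report the tested bit through CF (again masked with 0x8d5 in the flag-merge sequences). A minimal sketch of that shared logic, using hypothetical helper names:

```c
#include <stdint.h>

/* Hypothetical sketch of the "only subword offset" masking done above:
 * keep just the bit offset that fits inside the destination operand,
 * e.g. 0..15 for 2 bytes, 0..63 for 8 bytes. */
static uint64_t bt_offset(uint64_t src_val, unsigned dst_bytes)
{
        return src_val & (((uint64_t)dst_bytes << 3) - 1);
}

/* CF after BT is simply the addressed bit; BTS/BTR/BTC additionally
 * set, clear, or complement that bit in the destination. */
static int bt_carry(uint64_t dst_val, unsigned bit)
{
        return (int)((dst_val >> bit) & 1);
}
```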
case 0xba: /* Grp8 */
switch (modrm_reg & 3) {
3902: 8a 44 24 20 mov 0x20(%rsp),%al
3906: 83 e0 03 and $0x3,%eax
3909: 83 f8 01 cmp $0x1,%eax
390c: 0f 84 a3 fd ff ff je 36b5 <x86_emulate_memop+0x3628>
3912: 7f 0d jg 3921 <x86_emulate_memop+0x3894>
3914: 85 c0 test %eax,%eax
3916: 0f 84 87 fb ff ff je 34a3 <x86_emulate_memop+0x3416>
391c: e9 6c f1 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
3921: 83 f8 02 cmp $0x2,%eax
3924: 0f 84 88 fc ff ff je 35b2 <x86_emulate_memop+0x3525>
392a: 83 f8 03 cmp $0x3,%eax
392d: 0f 85 5a f1 ff ff jne 2a8d <x86_emulate_memop+0x2a00>
3933: e9 bb fe ff ff jmpq 37f3 <x86_emulate_memop+0x3766>
case 0:
goto bt;
case 1:
goto bts;
case 2:
goto btr;
case 3:
goto btc;
}
break;
case 0xbe ... 0xbf: /* movsx */
dst.bytes = op_bytes;
dst.val = (d & ByteOp) ? (s8) src.val : (s16) src.val;
3938: f6 44 24 18 01 testb $0x1,0x18(%rsp)
393d: 8b 4c 24 6c mov 0x6c(%rsp),%ecx
3941: 48 8b 84 24 18 01 00 mov 0x118(%rsp),%rax
3948: 00
3949: 89 8c 24 f4 00 00 00 mov %ecx,0xf4(%rsp)
3950: 74 06 je 3958 <x86_emulate_memop+0x38cb>
3952: 48 0f be c0 movsbq %al,%rax
3956: eb 04 jmp 395c <x86_emulate_memop+0x38cf>
3958: 48 0f bf c0 movswq %ax,%rax
395c: 48 89 84 24 f8 00 00 mov %rax,0xf8(%rsp)
3963: 00
3964: e9 24 f1 ff ff jmpq 2a8d <x86_emulate_memop+0x2a00>
break;
}
goto writeback;
twobyte_special_insn:
/* Disable writeback. */
no_wb = 1;
switch (b) {
3969: 40 80 fd 22 cmp $0x22,%bpl
396d: 74 69 je 39d8 <x86_emulate_memop+0x394b>
396f: 77 12 ja 3983 <x86_emulate_memop+0x38f6>
3971: 40 80 fd 06 cmp $0x6,%bpl
3975: 74 2b je 39a2 <x86_emulate_memop+0x3915>
3977: 40 80 fd 20 cmp $0x20,%bpl
397b: 0f 85 ed f1 ff ff jne 2b6e <x86_emulate_memop+0x2ae1>
3981: eb 2d jmp 39b0 <x86_emulate_memop+0x3923>
3983: 40 80 fd 32 cmp $0x32,%bpl
3987: 0f 84 a6 00 00 00 je 3a33 <x86_emulate_memop+0x39a6>
398d: 40 80 fd c7 cmp $0xc7,%bpl
3991: 0f 84 00 01 00 00 je 3a97 <x86_emulate_memop+0x3a0a>
3997: 40 80 fd 30 cmp $0x30,%bpl
399b: 74 64 je 3a01 <x86_emulate_memop+0x3974>
399d: e9 cc f1 ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
case 0x09: /* wbinvd */
break;
case 0x0d: /* GrpP (prefetch) */
case 0x18: /* Grp16 (prefetch/nop) */
break;
case 0x06:
emulate_clts(ctxt->vcpu);
39a2: 49 8b 7d 00 mov 0x0(%r13),%rdi
39a6: e8 00 00 00 00 callq 39ab <x86_emulate_memop+0x391e>
39a7: R_X86_64_PC32 emulate_clts+0xfffffffffffffffc
39ab: e9 be f1 ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
break;
case 0x20: /* mov cr, reg */
if (modrm_mod != 3)
39b0: 80 7c 24 1f 03 cmpb $0x3,0x1f(%rsp)
39b5: 0f 85 96 01 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
goto cannot_emulate;
_regs[modrm_rm] = realmode_get_cr(ctxt->vcpu, modrm_reg);
39bb: 0f b6 74 24 20 movzbl 0x20(%rsp),%esi
39c0: 49 8b 7d 00 mov 0x0(%r13),%rdi
39c4: e8 00 00 00 00 callq 39c9 <x86_emulate_memop+0x393c>
39c5: R_X86_64_PC32 realmode_get_cr+0xfffffffffffffffc
39c9: 0f b6 54 24 3f movzbl 0x3f(%rsp),%edx
39ce: 48 89 44 d4 70 mov %rax,0x70(%rsp,%rdx,8)
39d3: e9 96 f1 ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
break;
case 0x22: /* mov reg, cr */
if (modrm_mod != 3)
39d8: 80 7c 24 1f 03 cmpb $0x3,0x1f(%rsp)
39dd: 0f 85 6e 01 00 00 jne 3b51 <x86_emulate_memop+0x3ac4>
goto cannot_emulate;
realmode_set_cr(ctxt->vcpu, modrm_reg, modrm_val, &_eflags);
39e3: 0f b6 74 24 20 movzbl 0x20(%rsp),%esi
39e8: 49 8b 7d 00 mov 0x0(%r13),%rdi
39ec: 48 8d 8c 24 48 01 00 lea 0x148(%rsp),%rcx
39f3: 00
39f4: 4c 89 f2 mov %r14,%rdx
39f7: e8 00 00 00 00 callq 39fc <x86_emulate_memop+0x396f>
39f8: R_X86_64_PC32 realmode_set_cr+0xfffffffffffffffc
39fc: e9 6d f1 ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
break;
case 0x30:
/* wrmsr */
msr_data = (u32)_regs[VCPU_REGS_RAX]
3a01: 8b 54 24 70 mov 0x70(%rsp),%edx
3a05: 48 8b 84 24 80 00 00 mov 0x80(%rsp),%rax
3a0c: 00
| ((u64)_regs[VCPU_REGS_RDX] << 32);
rc = kvm_set_msr(ctxt->vcpu, _regs[VCPU_REGS_RCX], msr_data);
3a0d: 8b 74 24 78 mov 0x78(%rsp),%esi
3a11: 49 8b 7d 00 mov 0x0(%r13),%rdi
3a15: 48 c1 e0 20 shl $0x20,%rax
3a19: 48 09 c2 or %rax,%rdx
3a1c: 48 89 94 24 58 01 00 mov %rdx,0x158(%rsp)
3a23: 00
3a24: e8 00 00 00 00 callq 3a29 <x86_emulate_memop+0x399c>
3a25: R_X86_64_PC32 kvm_set_msr+0xfffffffffffffffc
if (rc) {
3a29: 85 c0 test %eax,%eax
3a2b: 0f 84 45 01 00 00 je 3b76 <x86_emulate_memop+0x3ae9>
3a31: eb 19 jmp 3a4c <x86_emulate_memop+0x39bf>
kvm_arch_ops->inject_gp(ctxt->vcpu, 0);
_eip = ctxt->vcpu->rip;
}
rc = X86EMUL_CONTINUE;
break;
case 0x32:
/* rdmsr */
rc = kvm_get_msr(ctxt->vcpu, _regs[VCPU_REGS_RCX], &msr_data);
3a33: 8b 74 24 78 mov 0x78(%rsp),%esi
3a37: 49 8b 7d 00 mov 0x0(%r13),%rdi
3a3b: 48 8d 94 24 58 01 00 lea 0x158(%rsp),%rdx
3a42: 00
3a43: e8 00 00 00 00 callq 3a48 <x86_emulate_memop+0x39bb>
3a44: R_X86_64_PC32 kvm_get_msr+0xfffffffffffffffc
if (rc) {
3a48: 85 c0 test %eax,%eax
3a4a: 74 2b je 3a77 <x86_emulate_memop+0x39ea>
kvm_arch_ops->inject_gp(ctxt->vcpu, 0);
3a4c: 48 8b 05 00 00 00 00 mov 0(%rip),%rax # 3a53 <x86_emulate_memop+0x39c6>
3a4f: R_X86_64_PC32 kvm_arch_ops+0xfffffffffffffffc
3a53: 31 f6 xor %esi,%esi
3a55: 49 8b 7d 00 mov 0x0(%r13),%rdi
3a59: ff 90 20 01 00 00 callq *0x120(%rax)
_eip = ctxt->vcpu->rip;
3a5f: 49 8b 45 00 mov 0x0(%r13),%rax
3a63: 48 8b 80 00 01 00 00 mov 0x100(%rax),%rax
3a6a: 48 89 84 24 50 01 00 mov %rax,0x150(%rsp)
3a71: 00
3a72: e9 f7 f0 ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
} else {
_regs[VCPU_REGS_RAX] = (u32)msr_data;
3a77: 48 8b 84 24 58 01 00 mov 0x158(%rsp),%rax
3a7e: 00
3a7f: 89 c3 mov %eax,%ebx
_regs[VCPU_REGS_RDX] = msr_data >> 32;
3a81: 48 c1 e8 20 shr $0x20,%rax
3a85: 48 89 5c 24 70 mov %rbx,0x70(%rsp)
3a8a: 48 89 84 24 80 00 00 mov %rax,0x80(%rsp)
3a91: 00
3a92: e9 df 00 00 00 jmpq 3b76 <x86_emulate_memop+0x3ae9>
}
rc = X86EMUL_CONTINUE;
break;
case 0xc7: /* Grp9 (cmpxchg8b) */
{
u64 old, new;
if ((rc = ops->read_emulated(cr2, &old, 8, ctxt)) != 0)
3a97: 48 8b 2c 24 mov (%rsp),%rbp
3a9b: 48 8d 9c 24 40 01 00 lea 0x140(%rsp),%rbx
3aa2: 00
3aa3: 4c 89 e9 mov %r13,%rcx
3aa6: ba 08 00 00 00 mov $0x8,%edx
3aab: 4c 89 e7 mov %r12,%rdi
3aae: 48 89 de mov %rbx,%rsi
3ab1: ff 55 10 callq *0x10(%rbp)
3ab4: 85 c0 test %eax,%eax
3ab6: 41 89 c7 mov %eax,%r15d
3ab9: 0f 85 e5 f0 ff ff jne 2ba4 <x86_emulate_memop+0x2b17>
goto done;
if (((u32) (old >> 0) != (u32) _regs[VCPU_REGS_RAX]) ||
3abf: 48 8b 94 24 40 01 00 mov 0x140(%rsp),%rdx
3ac6: 00
3ac7: 3b 54 24 70 cmp 0x70(%rsp),%edx
3acb: 89 d1 mov %edx,%ecx
3acd: 75 10 jne 3adf <x86_emulate_memop+0x3a52>
3acf: 48 89 d0 mov %rdx,%rax
3ad2: 48 c1 e8 20 shr $0x20,%rax
3ad6: 39 84 24 80 00 00 00 cmp %eax,0x80(%rsp)
3add: 74 21 je 3b00 <x86_emulate_memop+0x3a73>
((u32) (old >> 32) != (u32) _regs[VCPU_REGS_RDX])) {
_regs[VCPU_REGS_RAX] = (u32) (old >> 0);
_regs[VCPU_REGS_RDX] = (u32) (old >> 32);
_eflags &= ~EFLG_ZF;
3adf: 48 83 a4 24 48 01 00 andq $0xffffffffffffffbf,0x148(%rsp)
3ae6: 00 bf
3ae8: 89 c9 mov %ecx,%ecx
3aea: 48 c1 ea 20 shr $0x20,%rdx
3aee: 48 89 4c 24 70 mov %rcx,0x70(%rsp)
3af3: 48 89 94 24 80 00 00 mov %rdx,0x80(%rsp)
3afa: 00
3afb: e9 6e f0 ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
} else {
new = ((u64)_regs[VCPU_REGS_RCX] << 32)
3b00: 8b 84 24 88 00 00 00 mov 0x88(%rsp),%eax
3b07: 48 8b 54 24 78 mov 0x78(%rsp),%rdx
| (u32) _regs[VCPU_REGS_RBX];
if ((rc = ops->cmpxchg_emulated(cr2, &old,
3b0c: 48 89 de mov %rbx,%rsi
3b0f: 48 8b 1c 24 mov (%rsp),%rbx
3b13: 4d 89 e8 mov %r13,%r8
3b16: b9 08 00 00 00 mov $0x8,%ecx
3b1b: 4c 89 e7 mov %r12,%rdi
3b1e: 48 c1 e2 20 shl $0x20,%rdx
3b22: 48 09 d0 or %rdx,%rax
3b25: 48 8d 94 24 38 01 00 lea 0x138(%rsp),%rdx
3b2c: 00
3b2d: 48 89 84 24 38 01 00 mov %rax,0x138(%rsp)
3b34: 00
3b35: ff 53 20 callq *0x20(%rbx)
3b38: 85 c0 test %eax,%eax
3b3a: 41 89 c7 mov %eax,%r15d
3b3d: 0f 85 61 f0 ff ff jne 2ba4 <x86_emulate_memop+0x2b17>
&new, 8, ctxt)) != 0)
goto done;
_eflags |= EFLG_ZF;
3b43: 48 83 8c 24 48 01 00 orq $0x40,0x148(%rsp)
3b4a: 00 40
3b4c: e9 1d f0 ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
3b51: 83 c8 ff or $0xffffffffffffffff,%eax
3b54: eb 28 jmp 3b7e <x86_emulate_memop+0x3af1>
3b56: c7 84 24 14 01 00 00 movl $0x1,0x114(%rsp)
3b5d: 01 00 00 00
3b61: e9 2b ce ff ff jmpq 991 <x86_emulate_memop+0x904>
3b66: c7 84 24 14 01 00 00 movl $0x1,0x114(%rsp)
3b6d: 01 00 00 00
3b71: e9 ad e9 ff ff jmpq 2523 <x86_emulate_memop+0x2496>
3b76: 45 31 ff xor %r15d,%r15d
3b79: e9 f0 ef ff ff jmpq 2b6e <x86_emulate_memop+0x2ae1>
}
break;
}
}
goto writeback;
cannot_emulate:
DPRINTF("Cannot emulate %02x\n", b);
return -1;
}
3b7e: 48 81 c4 68 01 00 00 add $0x168,%rsp
3b85: 5b pop %rbx
3b86: 5d pop %rbp
3b87: 41 5c pop %r12
3b89: 41 5d pop %r13
3b8b: 41 5e pop %r14
3b8d: 41 5f pop %r15
3b8f: c3 retq
[-- Attachment #4: Type: text/plain, Size: 186 bytes --]
_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel
Thread overview: 9+ messages
2007-07-19 11:32 KVM-29 + Windows Server 2003 = kernel panic Alessandro Sardo
[not found] ` <469F4BE5.4040801-8RLafaVCWuNeoWH0uzbU5w@public.gmane.org>
2007-07-19 11:36 ` Avi Kivity
[not found] ` <469F7A34.4070606@polito.it>
[not found] ` <469F7F33.7040702@qumranet.com>
[not found] ` <469F7F33.7040702-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-07-23 14:26 ` Alessandro Sardo
2007-07-23 14:27 ` Alessandro Sardo
[not found] ` <46A4BAD5.6020906-8RLafaVCWuNeoWH0uzbU5w@public.gmane.org>
2007-07-24 11:12 ` KVM-33 + Windows Server 2003 = VMX->OK / SVM->kernel panic? Alessandro Sardo
[not found] ` <46A5DE99.6040407-8RLafaVCWuNeoWH0uzbU5w@public.gmane.org>
2007-07-24 11:30 ` Alexey Eremenko
[not found] ` <7fac565a0707240430w73393f46w729378a636f08ec2-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2007-07-24 11:36 ` Alexey Eremenko
2007-07-24 12:27 ` Avi Kivity
[not found] ` <46A5F029.4000002-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-07-24 12:45 ` Alessandro Sardo [this message]