From: Gleb Natapov <gleb@redhat.com>
To: Avi Kivity <avi@redhat.com>
Cc: kvm@vger.kernel.org, mtosatti@redhat.com
Subject: Re: [PATCHv5 4/4] KVM: emulator: optimize "rep ins" handling.
Date: Mon, 6 Aug 2012 11:58:46 +0300
Message-ID: <20120806085846.GV27579@redhat.com>
In-Reply-To: <501F854C.6070005@redhat.com>
On Mon, Aug 06, 2012 at 11:50:20AM +0300, Avi Kivity wrote:
> On 07/30/2012 05:38 PM, Gleb Natapov wrote:
> > Optimize "rep ins" by allowing emulator to write back more than one
> > datum at a time. Introduce new operand type OP_MEM_STR which tells
> > writeback() that dst contains pointer to an array that should be written
> > back as opposite to just one data element.
> >
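(To make the mechanism concrete, here is a stand-alone user-space sketch,
with made-up names, of the idea the patch implements: instead of the
write-back path copying one element per emulated iteration, the whole
buffered run is handed back and written in a single size * count copy.
This is only an illustration, not the emulator code; the real hunks are
quoted below.)

	#include <stdio.h>
	#include <string.h>

	/* Toy model of the "rep ins" read-ahead cache: data[] holds what
	 * was already fetched from the port, pos/end track consumption. */
	struct toy_read_cache {
		unsigned char data[64];
		unsigned int pos, end;
	};

	/* Old behaviour: one element per emulated iteration. */
	static void writeback_one(unsigned char *dst, struct toy_read_cache *rc,
				  unsigned int size)
	{
		memcpy(dst, rc->data + rc->pos, size);
		rc->pos += size;
	}

	/* New behaviour (conceptually): hand back everything buffered and
	 * let one write-back cover (end - pos) / size elements. */
	static unsigned int writeback_batch(unsigned char *dst,
					    struct toy_read_cache *rc,
					    unsigned int size)
	{
		unsigned int count = (rc->end - rc->pos) / size;

		memcpy(dst, rc->data + rc->pos, count * size);
		rc->pos = rc->end;
		return count;
	}

	int main(void)
	{
		struct toy_read_cache rc = { .end = 16 };
		unsigned char guest_mem[64];

		for (unsigned int i = 0; i < rc.end; i++)
			rc.data[i] = i;

		writeback_one(guest_mem, &rc, 2);               /* one word */
		printf("batched %u more words\n",
		       writeback_batch(guest_mem + 2, &rc, 2)); /* the rest */
		return 0;
	}
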
> > }
> >
> > - memcpy(dest, rc->data + rc->pos, size);
> > - rc->pos += size;
> > + if (ctxt->rep_prefix && !(ctxt->eflags & EFLG_DF)) {
> > + ctxt->dst.data = rc->data + rc->pos;
> > + ctxt->dst.type = OP_MEM_STR;
> > + ctxt->dst.count = (rc->end - rc->pos) / size;
> > + rc->pos = rc->end;
>
> Should take into account the segment limit.
>
It does, during writeback. pio_in_emulated() should linearize() the address
before calculating the page boundary, but that is a (minor) bug unrelated to
this patch series.
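
To illustrate the page-boundary clamp in question with a stand-alone sketch
(the names and the 4K page size are assumptions, not the kernel code): the
read-ahead batch is limited to what fits before the next page boundary, and
computing that from the effective RDI instead of the linearized address gives
a different answer whenever the segment base is not page aligned, which is
exactly the minor, pre-existing issue mentioned above.

	#include <stdio.h>

	#define TOY_PAGE_SIZE 4096UL

	/* How many 'size'-byte elements fit between 'addr' and the next page
	 * boundary (mirrors the clamp used when sizing the pio read-ahead). */
	static unsigned long elems_to_page_end(unsigned long addr, unsigned long size)
	{
		return (TOY_PAGE_SIZE - (addr & (TOY_PAGE_SIZE - 1))) / size;
	}

	int main(void)
	{
		unsigned long rdi = 0xff0;      /* effective address (offset in segment) */
		unsigned long seg_base = 0x8;   /* not page aligned on purpose */
		unsigned long linear = seg_base + rdi;
		unsigned long size = 2;         /* insw */

		printf("clamp from effective addr: %lu elements\n",
		       elems_to_page_end(rdi, size));    /* 8 */
		printf("clamp from linear addr:    %lu elements\n",
		       elems_to_page_end(linear, size)); /* 4 */
		return 0;
	}
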
> > + } else {
> > + memcpy(dest, rc->data + rc->pos, size);
> > + rc->pos += size;
> > + }
> > return 1;
> > }
> >
> > @@ -1500,6 +1507,14 @@ static int writeback(struct x86_emulate_ctxt *ctxt)
> > if (rc != X86EMUL_CONTINUE)
> > return rc;
> > break;
> > + case OP_MEM_STR:
> > + rc = segmented_write(ctxt,
> > + ctxt->dst.addr.mem,
> > + ctxt->dst.data,
> > + ctxt->dst.bytes * ctxt->dst.count);
> > + if (rc != X86EMUL_CONTINUE)
> > + return rc;
> > + break;
> > case OP_XMM:
> > write_sse_reg(ctxt, &ctxt->dst.vec_val, ctxt->dst.addr.xmm);
> > break;
> > @@ -2732,7 +2747,7 @@ int emulator_task_switch(struct x86_emulate_ctxt *ctxt,
> > static void string_addr_inc(struct x86_emulate_ctxt *ctxt, int reg,
> > struct operand *op)
> > {
> > - int df = (ctxt->eflags & EFLG_DF) ? -1 : 1;
> > + int df = (ctxt->eflags & EFLG_DF) ? -op->count : op->count;
> >
> > register_address_increment(ctxt, &ctxt->regs[reg], df * op->bytes);
> > op->addr.mem.ea = register_address(ctxt, ctxt->regs[reg]);
> > @@ -3672,7 +3687,7 @@ static struct opcode opcode_table[256] = {
> > I(DstReg | SrcMem | ModRM | Src2Imm, em_imul_3op),
> > I(SrcImmByte | Mov | Stack, em_push),
> > I(DstReg | SrcMem | ModRM | Src2ImmByte, em_imul_3op),
> > - I2bvIP(DstDI | SrcDX | Mov | String, em_in, ins, check_perm_in), /* insb, insw/insd */
> > + I2bvIP(DstDI | SrcDX | Mov | String | Unaligned, em_in, ins, check_perm_in), /* insb, insw/insd */
>
> Eww.
This brings us back to the question of what an alignment check is doing in
linearize() :)
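
(For anyone following along: the reason Unaligned is needed here, if memory
serves, is that the alignment check done during linearization only applies to
accesses of 16 bytes or more, and the batched write-back buffer can easily
exceed that. A stand-alone sketch of that check, with made-up names:)

	#include <stdbool.h>
	#include <stdio.h>

	/* Toy version of the alignment test done during linearization:
	 * accesses shorter than 16 bytes are never alignment-checked;
	 * larger ones must be naturally aligned unless the instruction
	 * is flagged as Unaligned. */
	static bool toy_alignment_fault(unsigned long la, unsigned size,
					bool unaligned_flag)
	{
		if (size < 16 || unaligned_flag)
			return false;
		return (la & (size - 1)) != 0;
	}

	int main(void)
	{
		/* A batched "rep insw" write-back: 20 words = 40 bytes at an
		 * address that is word- but not 40-byte-aligned. */
		unsigned long la = 0x1002;
		unsigned size = 40;

		printf("without Unaligned: %s\n",
		       toy_alignment_fault(la, size, false) ? "#GP" : "ok");
		printf("with Unaligned:    %s\n",
		       toy_alignment_fault(la, size, true) ? "#GP" : "ok");
		return 0;
	}
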
>
> > I2bvIP(SrcSI | DstDX | String, em_out, outs, check_perm_out), /* outsb, outsw/outsd */
> > /* 0x70 - 0x7F */
> > X16(D(SrcImmByte)),
> > @@ -3930,6 +3945,7 @@ static int decode_operand(struct x86_emulate_ctxt *ctxt, struct operand *op,
> > register_address(ctxt, ctxt->regs[VCPU_REGS_RDI]);
> > op->addr.mem.seg = VCPU_SREG_ES;
> > op->val = 0;
> > + op->count = 1;
> > break;
> > case OpDX:
> > op->type = OP_REG;
> > @@ -3973,6 +3989,7 @@ static int decode_operand(struct x86_emulate_ctxt *ctxt, struct operand *op,
> > register_address(ctxt, ctxt->regs[VCPU_REGS_RSI]);
> > op->addr.mem.seg = seg_override(ctxt);
> > op->val = 0;
> > + op->count = 1;
> > break;
> > case OpImmFAddr:
> > op->type = OP_IMM;
> > @@ -4513,8 +4530,14 @@ writeback:
> > string_addr_inc(ctxt, VCPU_REGS_RDI, &ctxt->dst);
> >
> > if (ctxt->rep_prefix && (ctxt->d & String)) {
> > + unsigned int count;
> > struct read_cache *r = &ctxt->io_read;
> > - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RCX], -1);
> > + if ((ctxt->d & SrcMask) == SrcSI)
> > + count = ctxt->src.count;
> > + else
> > + count = ctxt->dst.count;
>
> Does this work correctly for 'rep movs' and friends?
>
(src|dst).count is initialized to 1 during decode, so anything that does
not touch "count" behaves exactly like before.
--
Gleb.
Thread overview: 19+ messages
2012-07-30 14:38 [PATCHv5 0/4] improve speed of "rep ins" emulation Gleb Natapov
2012-07-30 14:38 ` [PATCHv5 1/4] Provide userspace IO exit completion callback Gleb Natapov
2012-08-02 19:26 ` Marcelo Tosatti
2012-08-05 14:49 ` Gleb Natapov
2012-07-30 14:38 ` [PATCHv5 2/4] KVM: emulator: make x86 emulation modes enum instead of defines Gleb Natapov
2012-07-30 14:38 ` [PATCHv5 3/4] KVM: emulator: string_addr_inc() cleanup Gleb Natapov
2012-07-30 14:38 ` [PATCHv5 4/4] KVM: emulator: optimize "rep ins" handling Gleb Natapov
2012-08-05 15:03 ` Avi Kivity
2012-08-05 15:18 ` Gleb Natapov
2012-08-05 15:20 ` Avi Kivity
2012-08-06 8:50 ` Avi Kivity
2012-08-06 8:58 ` Gleb Natapov [this message]
2012-08-06 9:28 ` Avi Kivity
2012-08-06 11:05 ` Gleb Natapov
2012-08-06 11:39 ` Avi Kivity
2012-08-06 11:49 ` Gleb Natapov
2012-08-06 12:08 ` Avi Kivity
2012-08-07 12:07 ` Gleb Natapov
2012-08-13 14:39 ` [PATCHv5 0/4] improve speed of "rep ins" emulation Richard W.M. Jones