* hvmemul_rep_movs() vs MMIO
From: Jan Beulich @ 2013-09-20 11:46 UTC
  To: Tim Deegan; +Cc: xen-devel

Tim,

was it really intended for "x86/hvm: use unlocked p2m lookups in
hvmemul_rep_movs()" to special case p2m_mmio_dm but not
p2m_mmio_direct?
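
(For context, the code in question after that commit looks roughly like
the following - paraphrased from the hunk quoted later in this thread;
the dp2mt/IOREQ_WRITE branch is shown by symmetry only and is not part
of the quoted context:)

    /* In hvmemul_rep_movs(), after translating both linear addresses: */
    (void) get_gfn_query_unlocked(current->domain, sgpa >> PAGE_SHIFT, &sp2mt);
    (void) get_gfn_query_unlocked(current->domain, dgpa >> PAGE_SHIFT, &dp2mt);

    if ( sp2mt == p2m_mmio_dm )        /* source is emulated MMIO */
        return hvmemul_do_mmio(
            sgpa, reps, bytes_per_rep, dgpa, IOREQ_READ, df, NULL);

    if ( dp2mt == p2m_mmio_dm )        /* destination is emulated MMIO */
        return hvmemul_do_mmio(
            dgpa, reps, bytes_per_rep, sgpa, IOREQ_WRITE, df, NULL);

    /* ... otherwise both sides get treated as RAM and copied directly,
     * which is where a p2m_mmio_direct mapping falls through. */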

Jan


* Re: hvmemul_rep_movs() vs MMIO
From: Tim Deegan @ 2013-09-20 12:05 UTC
  To: Jan Beulich; +Cc: xen-devel

At 12:46 +0100 on 20 Sep (1379681171), Jan Beulich wrote:
> Tim,
> 
> was it really intended for "x86/hvm: use unlocked p2m lookups in
> hvmemul_rep_movs()" to special case p2m_mmio_dm but not
> p2m_mmio_direct?

Hmm.  It certainly doesn't seem to handle that case very well now, but
I'm not sure the code before was any better.  AFAICT it would have
passed mmio_direct accesses to hvmemul_do_mmio(), which would send them
to qemu.
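
(Background, for readers not familiar with the p2m types involved: an
illustrative sketch of the distinction the thread hinges on. The real
p2m_type_t lives in xen/include/asm-x86/p2m.h and has many more entries;
this is a standalone toy enum, not the Xen definition:)

    /* Toy enum, illustrative only - not Xen's actual p2m_type_t. */
    enum toy_p2m_type {
        toy_p2m_mmio_dm,     /* emulated MMIO: accesses trap and are sent
                              * to the device model (qemu) as ioreqs,
                              * e.g. via hvmemul_do_mmio() */
        toy_p2m_mmio_direct, /* passed-through device MMIO: a valid
                              * gfn->mfn mapping exists, so accesses are
                              * normally satisfied by hardware and never
                              * reach the emulator */
    };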

Tim.


* Re: hvmemul_rep_movs() vs MMIO
From: Jan Beulich @ 2013-09-20 12:41 UTC
  To: Tim Deegan; +Cc: xen-devel

>>> On 20.09.13 at 14:05, Tim Deegan <tim@xen.org> wrote:
> At 12:46 +0100 on 20 Sep (1379681171), Jan Beulich wrote:
>> Tim,
>> 
>> was it really intended for "x86/hvm: use unlocked p2m lookups in
>> hvmemul_rep_movs()" to special case p2m_mmio_dm but not
>> p2m_mmio_direct?
> 
> Hmm.  It certainly doesn't seem to handle that case very well now, but
> I'm not sure the code before was any better.  AFAICT it would have
> passed mmio_direct accesses to hvmemul_do_mmio(), which would send them
> to qemu.

Hmm, wait - if MMIO of a passed through device, other than for
its port I/O, doesn't get intercepted at all, but instead gets taken
care of by there being a valid gfn->mfn translation in place, then
indeed before and after your change things aren't handled well.
Perhaps we should bail from there if either side is p2m_mmio_direct
as well as if both sides are p2m_mmio_dm:

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -799,6 +799,10 @@ static int hvmemul_rep_movs(
     (void) get_gfn_query_unlocked(current->domain, sgpa >> PAGE_SHIFT, &sp2mt);
     (void) get_gfn_query_unlocked(current->domain, dgpa >> PAGE_SHIFT, &dp2mt);
 
+    if ( sp2mt == p2m_mmio_direct || dp2mt == p2m_mmio_direct ||
+         (sp2mt == p2m_mmio_dm && dp2mt == p2m_mmio_dm) )
+        return X86EMUL_UNHANDLEABLE;
+
     if ( sp2mt == p2m_mmio_dm )
         return hvmemul_do_mmio(
             sgpa, reps, bytes_per_rep, dgpa, IOREQ_READ, df, NULL);

Jan


* Re: hvmemul_rep_movs() vs MMIO
From: Tim Deegan @ 2013-09-20 12:47 UTC
  To: Jan Beulich; +Cc: xen-devel

At 13:41 +0100 on 20 Sep (1379684515), Jan Beulich wrote:
> >>> On 20.09.13 at 14:05, Tim Deegan <tim@xen.org> wrote:
> > At 12:46 +0100 on 20 Sep (1379681171), Jan Beulich wrote:
> >> Tim,
> >> 
> >> was it really intended for "x86/hvm: use unlocked p2m lookups in
> >> hvmemul_rep_movs()" to special case p2m_mmio_dm but not
> >> p2m_mmio_direct?
> > 
> > Hmm.  It certainly doesn't seem to handle that case very well now, but
> > I'm not sure the code before was any better.  AFAICT it would have
> > passed mmio_direct accesses to hvmemul_do_mmio(), which would send them
> > to qemu.
> 
> Hmm, wait - if MMIO of a passed through device, other than for
> its port I/O, doesn't get intercepted at all, but instead gets taken
> care of by there being a valid gfn->mfn translation in place, then
> indeed before and after your change things aren't handled well.
> Perhaps we should bail from there if either side is p2m_mmio_direct
> as well as if both sides are p2m_mmio_dm:

Sounds OK to me.  Clearly mmio-mmio string operations aren't something
we need to make go fast. :)

Reviewed-by: Tim Deegan <tim@xen.org>

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -799,6 +799,10 @@ static int hvmemul_rep_movs(
>      (void) get_gfn_query_unlocked(current->domain, sgpa >> PAGE_SHIFT, &sp2mt);
>      (void) get_gfn_query_unlocked(current->domain, dgpa >> PAGE_SHIFT, &dp2mt);
>  
> +    if ( sp2mt == p2m_mmio_direct || dp2mt == p2m_mmio_direct ||
> +         (sp2mt == p2m_mmio_dm && dp2mt == p2m_mmio_dm) )
> +        return X86EMUL_UNHANDLEABLE;
> +
>      if ( sp2mt == p2m_mmio_dm )
>          return hvmemul_do_mmio(
>              sgpa, reps, bytes_per_rep, dgpa, IOREQ_READ, df, NULL);
> 
> Jan
> 


* Re: hvmemul_rep_movs() vs MMIO
From: Jan Beulich @ 2013-09-20 12:53 UTC
  To: Tim Deegan; +Cc: xen-devel

>>> On 20.09.13 at 14:47, Tim Deegan <tim@xen.org> wrote:
> At 13:41 +0100 on 20 Sep (1379684515), Jan Beulich wrote:
>> >>> On 20.09.13 at 14:05, Tim Deegan <tim@xen.org> wrote:
>> > At 12:46 +0100 on 20 Sep (1379681171), Jan Beulich wrote:
>> >> Tim,
>> >> 
>> >> was it really intended for "x86/hvm: use unlocked p2m lookups in
>> >> hvmemul_rep_movs()" to special case p2m_mmio_dm but not
>> >> p2m_mmio_direct?
>> > 
>> > Hmm.  It certainly doesn't seem to handle that case very well now, but
>> > I'm not sure the code before was any better.  AFAICT it would have
>> > passed mmio_direct accesses to hvmemul_do_mmio(), which would send them
>> > to qemu.
>> 
>> Hmm, wait - if MMIO of a passed through device, other than for
>> its port I/O, doesn't get intercepted at all, but instead gets taken
>> care of by there being a valid gfn->mfn translation in place, then
>> indeed before and after your change things aren't handled well.
>> Perhaps we should bail from there if either side is p2m_mmio_direct
>> as well as if both sides are p2m_mmio_dm:
> 
> Sounds OK to me.  Clearly mmio-mmio string operations aren't something
> we need to make go fast. :)

Who knows: frame buffer internal moves, for example, might be desirable
to accelerate. But if we want to do that, we need to honor the original
access size at the very least.

> Reviewed-by: Tim Deegan <tim@xen.org>

Actually the final patch will be bigger - we need to do something
similar in rep_ins and rep_outs. I'll send the whole thing in a few
minutes (after smoke testing).
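
(For illustration, the analogous check in hvmemul_rep_ins() /
hvmemul_rep_outs() might look roughly like the code below - a guess at
the shape of that follow-up, not its actual contents; only one
guest-memory operand is involved there, so a single lookup suffices, and
the names gpa/p2mt are placeholders:)

    (void) get_gfn_query_unlocked(current->domain, gpa >> PAGE_SHIFT, &p2mt);
    if ( p2mt == p2m_mmio_direct || p2mt == p2m_mmio_dm )
        return X86EMUL_UNHANDLEABLE;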

Jan

>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -799,6 +799,10 @@ static int hvmemul_rep_movs(
>>      (void) get_gfn_query_unlocked(current->domain, sgpa >> PAGE_SHIFT, &sp2mt);
>>      (void) get_gfn_query_unlocked(current->domain, dgpa >> PAGE_SHIFT, &dp2mt);
>>  
>> +    if ( sp2mt == p2m_mmio_direct || dp2mt == p2m_mmio_direct ||
>> +         (sp2mt == p2m_mmio_dm && dp2mt == p2m_mmio_dm) )
>> +        return X86EMUL_UNHANDLEABLE;
>> +
>>      if ( sp2mt == p2m_mmio_dm )
>>          return hvmemul_do_mmio(
>>              sgpa, reps, bytes_per_rep, dgpa, IOREQ_READ, df, NULL);
>> 
>> Jan
>> 

