public inbox for kvm@vger.kernel.org
* BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful
@ 2013-05-02  3:40 Tom Lyon
  2013-05-02  5:25 ` Michael S. Tsirkin
  2013-05-02 11:37 ` Pawel Moll
  0 siblings, 2 replies; 5+ messages in thread
From: Tom Lyon @ 2013-05-02  3:40 UTC (permalink / raw)
  To: rusty, mst, pawel.moll, virtualization, kvm

Virtio_mmio attempts to mimic the layout of some control registers from 
virtio_pci.  These registers, in particular VIRTIO_MMIO_QUEUE_SEL and 
VIRTIO_PCI_QUEUE_SEL, are active in nature, not just passive like a 
normal memory location.  Thus, the host side must react immediately to 
a write of these registers to map some other registers (queue address, 
size, etc.) to queue-specific locations.  This is just not possible for 
mmio, and, I would argue, not desirable for PCI either.
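
For illustration, the select-then-access pattern looks roughly like 
this (a sketch against the legacy virtio_mmio offsets, not the actual 
driver code; 'base' is the device's ioremap()ed window and 'ring_pfn' 
the page frame number of the queue's ring):

    writel(2, base + VIRTIO_MMIO_QUEUE_SEL);        /* the "active" write */
    max = readl(base + VIRTIO_MMIO_QUEUE_NUM_MAX);  /* now refers to queue 2 */
    writel(ring_pfn, base + VIRTIO_MMIO_QUEUE_PFN); /* sets queue 2's ring */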

Because the queue selector register doesn't work in mmio, it is clear 
that only single-queue virtio devices can work.  This means no 
virtio_net - I've seen a few messages complaining that it doesn't 
work, but nothing so far explaining why.

It seems from some messages back in March that there is a register 
re-layout in the works for virtio_pci.  I think that virtio_pci could 
become just one of the various ways to configure a virtio_mmio device, 
and there would be no need for any "registers", just memory locations 
acting like memory.  The one gotcha is figuring out the kick/notify 
mechanism for the guest to notify the host when there is work on a 
queue.  For notify, using a hypervisor call could unify the pci and 
mmio cases, but it comes with the cost of leaving the pure pci domain.
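
For concreteness, the two existing notify paths look roughly like this 
(a sketch against the legacy register layouts; a hypercall-based 
notify would replace both):

    /* legacy virtio_pci: I/O-port write, one doorbell for all queues */
    iowrite16(vq_index, ioaddr + VIRTIO_PCI_QUEUE_NOTIFY);

    /* virtio_mmio: 32-bit store to a memory-mapped register */
    writel(vq_index, base + VIRTIO_MMIO_QUEUE_NOTIFY);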

I got into this code because I am looking at the possibility of using 
an off-the-shelf embedded processor sitting on a PCIe port to emulate 
the virtio pci interface.  The notion of active registers makes this a 
non-starter, whereas if there were a purely memory-based system like 
mmio (with mq fixes), a real PCI device could easily emulate it.  
Excepting, of course, whatever the notify mechanism is.  If it were 
hypercall based, then the hypervisor could call a transport- or 
device-specific way of notifying, and a small notify driver could poke 
the PCI device in some way.


* Re: BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful
  2013-05-02  3:40 BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful Tom Lyon
@ 2013-05-02  5:25 ` Michael S. Tsirkin
  2013-05-02 15:05   ` Tom Lyon
  2013-05-02 11:37 ` Pawel Moll
  1 sibling, 1 reply; 5+ messages in thread
From: Michael S. Tsirkin @ 2013-05-02  5:25 UTC (permalink / raw)
  To: Tom Lyon; +Cc: kvm, pawel.moll, virtualization

On Wed, May 01, 2013 at 08:40:54PM -0700, Tom Lyon wrote:
> Virtio_mmio attempts to mimic the layout of some control registers
> from virtio_pci.  These registers, in particular
> VIRTIO_MMIO_QUEUE_SEL and VIRTIO_PCI_QUEUE_SEL,
> are active in nature, and not just passive like a normal memory
> location.  Thus, the host side must react immediately upon write of
> these registers to map some other registers (queue address, size,
> etc) to queue-specific locations.  This is just not possible for
> mmio, and, I would argue, not desirable for PCI either.
> 
> Because the queue selector register doesn't work in mmio, it is
> clear that only single queue virtio devices can work.  This means no
> virtio_net - I've seen a few messages
> complaining that it doesn't work but nothing so far on why.
> 
> It seems from some messages back in March that there is a register
> re-layout in the works for virtio_pci.  I think that virtio_pci
> could become just one of the
> various ways to configure a virtio_mmio device and there would be no
> need for any "registers", just memory locations acting like memory.
> The one gotcha is in
> figuring out the kick/notify mechanism for the guest to notify the
> host when there is work on a queue.  For notify, using a hypervisor
> call could unify the pci and mmio
> cases, but comes with the cost of leaving the pure pci domain.
> 
> I got into this code because I am looking at the possibility of
> using an off the shelf embedded processor sitting on a PCIe port to
> emulate the virtio pci interface.  The
> notion of active registers makes this a non-starter, whereas if
> there was a purely memory based system like mmio (with mq fixes), a
> real PCI device could easily emulate it.
> Excepting, of course, whatever the notify mechanism is.  If it were
> hypercall based, then the hypervisor could call a transport or
> device specific way of notifying and a small
> notify driver could poke the PCI device in some way.


This was discussed on this thread:
	'[PATCH 16/22] virtio_pci: use separate notification offsets for each vq'
Please take a look there and confirm that this addresses your concern.
I'm working on making memory I/O as fast as PIO on x86.  It's 
implemented on Intel; once I do it on AMD too, and assuming it's as 
fast as PIO, we'll do MMIO everywhere.
Then with a PCI card, you won't have exits for notification, just 
normal passthrough.
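
With per-queue notification offsets, the driver computes each queue's 
doorbell address from per-device fields, roughly like this (a sketch; 
field names assumed from the proposed layout):

	/* no shared QUEUE_NOTIFY register to serialize on */
	void __iomem *db = notify_base +
			queue_notify_off * notify_off_multiplier;
	iowrite16(vq_index, db);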


-- 
MST


* Re: BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful
  2013-05-02  3:40 BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful Tom Lyon
  2013-05-02  5:25 ` Michael S. Tsirkin
@ 2013-05-02 11:37 ` Pawel Moll
  2013-05-02 15:17   ` Tom Lyon
  1 sibling, 1 reply; 5+ messages in thread
From: Pawel Moll @ 2013-05-02 11:37 UTC (permalink / raw)
  To: Tom Lyon
  Cc: rusty@rustcorp.com.au, mst@redhat.com,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org

Hi Tom,

On Thu, 2013-05-02 at 04:40 +0100, Tom Lyon wrote:
> Virtio_mmio attempts to mimic the layout of some control registers from 
> virtio_pci.  These registers, in particular VIRTIO_MMIO_QUEUE_SEL and 
> VIRTIO_PCI_QUEUE_SEL,
> are active in nature, and not just passive like a normal memory 
> location.  Thus, the host side must react immediately upon write of 
> these registers to map some other registers (queue address, size, etc) 
> to queue-specific locations.  This is just not possible for mmio, and, I 
> would argue, not desirable for PCI either.

Could you please elaborate on the environment you are talking
about here?

The intention of the MMIO device is to behave like a normal
memory-mapped peripheral, say a serial port. In the world of
architectures without a separate I/O address space (like ARM, MIPS and
SH-4, to name only those I know anything about), such peripherals are
usually mapped into the virtual address space with special attributes,
e.g. guaranteeing transaction ordering. That's why the host can "react
immediately", and to my knowledge multi-queue virtio devices work just
fine.
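
So a trap-and-emulate host can handle QUEUE_SEL on the spot, along the
lines of this hypothetical handler (a sketch, not any particular
implementation; struct vdev is invented for illustration):

    /* every guest store to the region exits to the host, so the
     * selection takes effect before the next access is serviced */
    static void mmio_write(struct vdev *d, unsigned long off, u32 val)
    {
            switch (off) {
            case VIRTIO_MMIO_QUEUE_SEL:
                    d->sel = val;   /* later QUEUE_* ops hit vq[d->sel] */
                    break;
            case VIRTIO_MMIO_QUEUE_PFN:
                    d->vq[d->sel].pfn = val;
                    break;
            }
    }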

I'd love to see comments from someone with x86 expertise in such
areas. Maybe we are missing some memory barriers here, so that the
host implementation would have a chance to react to the QUEUE_SEL
access before servicing other transactions?

Regards

Paweł




* Re: BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful
  2013-05-02  5:25 ` Michael S. Tsirkin
@ 2013-05-02 15:05   ` Tom Lyon
  0 siblings, 0 replies; 5+ messages in thread
From: Tom Lyon @ 2013-05-02 15:05 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: rusty, pawel.moll, virtualization, kvm

On 05/01/2013 10:25 PM, Michael S. Tsirkin wrote:
> On Wed, May 01, 2013 at 08:40:54PM -0700, Tom Lyon wrote:
>> Virtio_mmio attempts to mimic the layout of some control registers
>> from virtio_pci.  These registers, in particular
>> VIRTIO_MMIO_QUEUE_SEL and VIRTIO_PCI_QUEUE_SEL,
>> are active in nature, and not just passive like a normal memory
>> location.  Thus, the host side must react immediately upon write of
>> these registers to map some other registers (queue address, size,
>> etc) to queue-specific locations.  This is just not possible for
>> mmio, and, I would argue, not desirable for PCI either.
>>
>> Because the queue selector register doesn't work in mmio, it is
>> clear that only single queue virtio devices can work.  This means no
>> virtio_net - I've seen a few messages
>> complaining that it doesn't work but nothing so far on why.
>>
>> It seems from some messages back in March that there is a register
>> re-layout in the works for virtio_pci.  I think that virtio_pci
>> could become just one of the
>> various ways to configure a virtio_mmio device and there would be no
>> need for any "registers", just memory locations acting like memory.
>> The one gotcha is in
>> figuring out the kick/notify mechanism for the guest to notify the
>> host when there is work on a queue.  For notify, using a hypervisor
>> call could unify the pci and mmio
>> cases, but comes with the cost of leaving the pure pci domain.
>>
>> I got into this code because I am looking at the possibility of
>> using an off the shelf embedded processor sitting on a PCIe port to
>> emulate the virtio pci interface.  The
>> notion of active registers makes this a non-starter, whereas if
>> there was a purely memory based system like mmio (with mq fixes), a
>> real PCI device could easily emulate it.
>> Excepting, of course, whatever the notify mechanism is.  If it were
>> hypercall based, then the hypervisor could call a transport or
>> device specific way of notifying and a small
>> notify driver could poke the PCI device in some way.
>
> This was discussed on this thread:
> 	'[PATCH 16/22] virtio_pci: use separate notification offsets for each vq'
> Please take a look there and confirm that this addresses your concern.
> I'm working on making memory I/O as fast as PIO on x86.  It's
> implemented on Intel; once I do it on AMD too, and assuming it's as
> fast as PIO, we'll do MMIO everywhere.
> Then with a PCI card, you won't have exits for notification, just
> normal passthrough.
>
>
Yes, I had seen that thread. It addresses my concerns for pci, but not 
mmio, although I slightly favor a hypercall notify mechanism over a pci 
write.


* Re: BUG: virtio_mmio multi-queue completely broken -- virtio *registers* considered harmful
  2013-05-02 11:37 ` Pawel Moll
@ 2013-05-02 15:17   ` Tom Lyon
  0 siblings, 0 replies; 5+ messages in thread
From: Tom Lyon @ 2013-05-02 15:17 UTC (permalink / raw)
  To: Pawel Moll
  Cc: rusty@rustcorp.com.au, mst@redhat.com,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org

On 05/02/2013 04:37 AM, Pawel Moll wrote:
> Hi Tom,
>
> On Thu, 2013-05-02 at 04:40 +0100, Tom Lyon wrote:
>> Virtio_mmio attempts to mimic the layout of some control registers from
>> virtio_pci.  These registers, in particular VIRTIO_MMIO_QUEUE_SEL and
>> VIRTIO_PCI_QUEUE_SEL,
>> are active in nature, and not just passive like a normal memory
>> location.  Thus, the host side must react immediately upon write of
>> these registers to map some other registers (queue address, size, etc)
>> to queue-specific locations.  This is just not possible for mmio, and, I
>> would argue, not desirable for PCI either.
> Could you please elaborate on the environment you are talking
> about here?
>
> The intention of the MMIO device is to behave like a normal
> memory-mapped peripheral, say a serial port. In the world of
> architectures without a separate I/O address space (like ARM, MIPS
> and SH-4, to name only those I know anything about), such peripherals
> are usually mapped into the virtual address space with special
> attributes, e.g. guaranteeing transaction ordering. That's why the
> host can "react immediately", and to my knowledge multi-queue virtio
> devices work just fine.
>
> I'd love to see comments from someone with x86 expertise in such
> areas. Maybe we are missing some memory barriers here, so that the
> host implementation would have a chance to react to the QUEUE_SEL
> access before servicing other transactions?
>
> Regards
>
> Paweł
>
>
Ah, my mistake. I was assuming that mmio just used shared memory 
regions, not that there were emulated I/O registers.  I was driven to 
that assumption by looking at the rpmsg use case, in which a main 
processor talks to satellite processors - there seems to be no 
hypervisor and therefore no emulated I/O in that case.  However, 
looking deeper, it seems that rpmsg has its own notify mechanism 
anyway, so it is not so cleanly layered on virtio.  Also, for my 
needs, I want rpmsg and virtio_net.

In my desire to use a PCIe card to emulate virtio I am also talking 
about a non-hypervisor use case - just leveraging the virtio framework 
for inter-processor communication.  Virtio is so close to being the 
answer that it would be a shame if it wasn't.  So I would propose that 
virtio be restructured such that the configuration layout is identical 
among virtio transports and acts like memory, the configuration 
location is transport-specific, and the notification mechanism is 
transport-specific - but I could live with writing to some 
queue-specific memory/io location (for both pci and mmio), along the 
lines of the sketch below.
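
Something like this hypothetical layout, with per-queue slots instead
of a QUEUE_SEL register (names invented purely for illustration):

    /* passive config block: plain memory, no active registers; the
     * host polls it or is kicked via the transport's own notify */
    struct virtio_shm_queue {
            u64 desc_addr;  /* guest-physical address of the ring */
            u32 num;        /* ring size */
            u32 ready;      /* set by the guest once configured */
    };

    struct virtio_shm_config {
            u32 device_features;
            u32 driver_features;
            u32 status;
            struct virtio_shm_queue vq[VIRTIO_SHM_MAX_QUEUES];
    };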




