From: Anthony Liguori <anthony@codemonkey.ws>
To: Zachary Amsden <zamsden@redhat.com>
Cc: Paul Brook <paul@codesourcery.com>, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [RFC] New device API
Date: Sat, 09 May 2009 08:36:23 -0500
Message-ID: <4A0586D7.5010008@codemonkey.ws>
In-Reply-To: <4A04DAA5.4040505@redhat.com>
Zachary Amsden wrote:
> Anthony Liguori wrote:
>
>
>> Yes. I think this is important too. But when we introduce this, we
>> need to make sure the devices pre-register what strings they support and
>> provide human-consumable descriptions of what those knobs do.
>> Basically, we should be able to auto-extract a hardware documentation
>> file from the device that describes in detail all of the supported knobs.
>>
>
> Yes, I keep falling back to some sort of DEVICETYPE{NNN}.PARAM = "VALUE"
> scheme, sort of like .vmx config files. However, those failed horribly
> by not complaining about unparsed parameters; silently ignoring config
> data is wrong, and pre-registration should be required, since it stops
> both silent typo failures and collisions.
>
Exactly.
>> For the most part, I think the device should be unaware of these
>> things. It never needs to see its devfn. It should pre-register what
>> LNKs it supports and whether it supports MSI, but it should never know
>> which IRQs those actually get routed to.
>>
>
> Ideally, but I think in practice the line will blur: unless you have a
> completely ideal bus abstraction, there will still be a need to fill
> in reads from configuration space and their associated device-specific
> side effects.
>
Perhaps, but I want to get away from any assumption that we're saying
PCI device 00:01:04.4 is connected to IRQ 10 in a configuration file.
How a PCI LNK gets routed to an actual IRQ depends on a lot of things
and to support something like this, you'd need rather sophisticated
autogeneration of ACPI, dynamic allocation of number of LNKs, etc. PCI
devices internally determine how many LNKs they use. In the config
file, we should merely be saying "create PCI device 00:01:04.4 with
vendor ID X and device ID Y". Then we should be able to add
device-specific configuration bits (for virtio-net, for instance, we
may say how large the queue size is).
Details of interrupt routing, BAR location, etc. are out of scope IMHO.
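Something along these lines is what I picture, purely as an illustration; the syntax and key names are made up, and only the shape matters:

```ini
; Hypothetical config sketch, not a committed format.
[device "pci-00:01:04.4"]
vendor-id  = 0x1af4      ; "vendor ID X"
device-id  = 0x1000      ; "device ID Y"
; device-specific knob, pre-registered by the device itself:
queue-size = 256

; Note what is absent: no IRQ numbers, no BAR addresses. Interrupt
; routing and BAR placement belong to the machine/bus model, not here.
```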
>> A device registers its creation function in its module_init(). This creation function will then register the fact that it's a PCI device and will register basic information about itself in that registration. A PCI device can be instantiated via a device configuration file and when that happens, the device will create a host device for whatever functionality it supports. For a NIC, this would be a NetworkHostDevice or something like that. This NetworkHostDevice would have some link to the device itself (via an id, handle, whatever). A user can then create a NetworkHostDriver and attach the NetworkHostDevice to that driver and you then have a functional emulated NIC.
>>
>
> This sounds pretty much ideal, I would say, but the details are really
> in "will create a host device for whatever functionality". Is there a
> plan to lay out frontend APIs(1) for the various types of devices
> (sound, pointer, net, SCSI) so we can have modularized host driver backends?
>
Yes, and I don't think we're that far away from that today. I think the
block driver API is shaping up nicely. We've almost got an ideal API.
It'll be a bit more work to remove all of the legacy aspects of the API.
The networking API needs a revamp but I think there's general consensus
on how to do that. DisplayState is getting good as well, but it still
needs input support. We also have to rework how TextConsole and
multiplexing are implemented so that they become proper DisplayState
layering instead of the hackery that exists today.
> (1) With APIs being flexible, I don't mean a fixed link-type module, I
> mean well-modularized code that makes the 5 architectures x 50 devices x
> 4 bus models x 3 host implementations less of a problem than it
> currently is.
>
Yes, I think that's what we're aiming for here.
Regards,
Anthony Liguori