* [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-21 22:10 [PATCH v2 0/6] xen-arm: add support for virtio-pci Volodymyr Babchuk
@ 2023-11-21 22:10 ` Volodymyr Babchuk
2023-11-22 15:54 ` Paul Durrant
2023-11-22 17:05 ` Paul Durrant
2023-11-21 22:10 ` [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner Volodymyr Babchuk
` (4 subsequent siblings)
5 siblings, 2 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-21 22:10 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall, Anthony Perard,
Paul Durrant, Kevin Wolf, Hanna Reitz, Marc-André Lureau,
Paolo Bonzini, Jason Wang, open list:X86 Xen CPUs,
open list:Block layer core
From: David Woodhouse <dwmw@amazon.co.uk>
This allows a XenDevice implementation to know whether it was created
by QEMU, or merely discovered in XenStore after the toolstack created
it. This will allow us to create frontend/backend nodes only when we
should, rather than unconditionally attempting to overwrite them from
a driver domain which doesn't have privileges to do so.
As an added benefit, it also means we no longer have to call the
xen_backend_set_device() function from the device models immediately
after calling qdev_realize_and_unref(). Even though we could make
the argument that it's safe to do so, and the pointer to the unreffed
device *will* actually still be valid, it still made my skin itch to
look at it.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
hw/block/xen-block.c | 3 +--
hw/char/xen_console.c | 2 +-
hw/net/xen_nic.c | 2 +-
hw/xen/xen-bus.c | 4 ++++
include/hw/xen/xen-backend.h | 2 --
include/hw/xen/xen-bus.h | 2 ++
6 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 6d64ede94f..c2ac9db4a2 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -1081,13 +1081,12 @@ static void xen_block_device_create(XenBackendInstance *backend,
blockdev->iothread = iothread;
blockdev->drive = drive;
+ xendev->backend = backend;
if (!qdev_realize_and_unref(DEVICE(xendev), BUS(xenbus), errp)) {
error_prepend(errp, "realization of device %s failed: ", type);
goto fail;
}
-
- xen_backend_set_device(backend, xendev);
return;
fail:
diff --git a/hw/char/xen_console.c b/hw/char/xen_console.c
index 5cbee2f184..bef8a3a621 100644
--- a/hw/char/xen_console.c
+++ b/hw/char/xen_console.c
@@ -600,8 +600,8 @@ static void xen_console_device_create(XenBackendInstance *backend,
goto fail;
}
+ xendev->backend = backend;
if (qdev_realize_and_unref(DEVICE(xendev), BUS(xenbus), errp)) {
- xen_backend_set_device(backend, xendev);
goto done;
}
diff --git a/hw/net/xen_nic.c b/hw/net/xen_nic.c
index af4ba3f1e6..afa10c96e8 100644
--- a/hw/net/xen_nic.c
+++ b/hw/net/xen_nic.c
@@ -627,8 +627,8 @@ static void xen_net_device_create(XenBackendInstance *backend,
net->dev = number;
memcpy(&net->conf.macaddr, &mac, sizeof(mac));
+ xendev->backend = backend;
if (qdev_realize_and_unref(DEVICE(xendev), BUS(xenbus), errp)) {
- xen_backend_set_device(backend, xendev);
return;
}
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index 4973e7d9c9..dd0171ab98 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -1079,6 +1079,10 @@ static void xen_device_realize(DeviceState *dev, Error **errp)
}
}
+ if (xendev->backend) {
+ xen_backend_set_device(xendev->backend, xendev);
+ }
+
xendev->exit.notify = xen_device_exit;
qemu_add_exit_notifier(&xendev->exit);
return;
diff --git a/include/hw/xen/xen-backend.h b/include/hw/xen/xen-backend.h
index 0f01631ae7..ea080ba7c9 100644
--- a/include/hw/xen/xen-backend.h
+++ b/include/hw/xen/xen-backend.h
@@ -10,8 +10,6 @@
#include "hw/xen/xen-bus.h"
-typedef struct XenBackendInstance XenBackendInstance;
-
typedef void (*XenBackendDeviceCreate)(XenBackendInstance *backend,
QDict *opts, Error **errp);
typedef void (*XenBackendDeviceDestroy)(XenBackendInstance *backend,
diff --git a/include/hw/xen/xen-bus.h b/include/hw/xen/xen-bus.h
index 334ddd1ff6..7647c4c38e 100644
--- a/include/hw/xen/xen-bus.h
+++ b/include/hw/xen/xen-bus.h
@@ -14,9 +14,11 @@
#include "qom/object.h"
typedef struct XenEventChannel XenEventChannel;
+typedef struct XenBackendInstance XenBackendInstance;
struct XenDevice {
DeviceState qdev;
+ XenBackendInstance *backend;
domid_t frontend_id;
char *name;
struct qemu_xs_handle *xsh;
--
2.42.0
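The new XenDevice::backend pointer added above is what later patches in this
series key off when deciding whether QEMU owns the XenStore nodes. A condensed
sketch of that intended usage, mirroring what patch 2/6 does in a backend's
realize method; example_backend_realize() and "example-key" are placeholders,
not code from the series:

/* Sketch only: write frontend nodes only when QEMU itself created the
 * device, i.e. when no XenBackendInstance is attached. */
static void example_backend_realize(XenDevice *xendev, Error **errp)
{
    if (!xendev->backend) {
        /* Device came from the QEMU command line; QEMU owns the nodes. */
        xen_device_frontend_printf(xendev, "example-key", "%s", "value");
    }
    /* Otherwise the Xen toolstack created (and owns) the XenStore nodes. */
}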
* Re: [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-21 22:10 ` [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it Volodymyr Babchuk
@ 2023-11-22 15:54 ` Paul Durrant
2023-11-22 17:05 ` Paul Durrant
1 sibling, 0 replies; 52+ messages in thread
From: Paul Durrant @ 2023-11-22 15:54 UTC (permalink / raw)
To: Volodymyr Babchuk, qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall, Anthony Perard,
Kevin Wolf, Hanna Reitz, Marc-André Lureau, Paolo Bonzini,
Jason Wang, open list:X86 Xen CPUs, open list:Block layer core
On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> This allows a XenDevice implementation to know whether it was created
> by QEMU, or merely discovered in XenStore after the toolstack created
> it. This will allow us to create frontend/backend nodes only when we
> should, rather than unconditionally attempting to overwrite them from
> a driver domain which doesn't have privileges to do so.
>
> As an added benefit, it also means we no longer have to call the
> xen_backend_set_device() function from the device models immediately
> after calling qdev_realize_and_unref(). Even though we could make
> the argument that it's safe to do so, and the pointer to the unreffed
> device *will* actually still be valid, it still made my skin itch to
> look at it.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
> hw/block/xen-block.c | 3 +--
> hw/char/xen_console.c | 2 +-
> hw/net/xen_nic.c | 2 +-
> hw/xen/xen-bus.c | 4 ++++
> include/hw/xen/xen-backend.h | 2 --
> include/hw/xen/xen-bus.h | 2 ++
> 6 files changed, 9 insertions(+), 6 deletions(-)
>
Reviewed-by: Paul Durrant <paul@xen.org>
* Re: [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-21 22:10 ` [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it Volodymyr Babchuk
2023-11-22 15:54 ` Paul Durrant
@ 2023-11-22 17:05 ` Paul Durrant
2023-11-22 22:44 ` Woodhouse, David via
2023-11-22 22:56 ` Volodymyr Babchuk
1 sibling, 2 replies; 52+ messages in thread
From: Paul Durrant @ 2023-11-22 17:05 UTC (permalink / raw)
To: Volodymyr Babchuk, qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall, Anthony Perard,
Kevin Wolf, Hanna Reitz, Marc-André Lureau, Paolo Bonzini,
Jason Wang, open list:X86 Xen CPUs, open list:Block layer core
On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> This allows a XenDevice implementation to know whether it was created
> by QEMU, or merely discovered in XenStore after the toolstack created
> it. This will allow us to create frontend/backend nodes only when we
> should, rather than unconditionally attempting to overwrite them from
> a driver domain which doesn't have privileges to do so.
>
> As an added benefit, it also means we no longer have to call the
> xen_backend_set_device() function from the device models immediately
> after calling qdev_realize_and_unref(). Even though we could make
> the argument that it's safe to do so, and the pointer to the unreffed
> device *will* actually still be valid, it still made my skin itch to
> look at it.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
> hw/block/xen-block.c | 3 +--
> hw/char/xen_console.c | 2 +-
> hw/net/xen_nic.c | 2 +-
> hw/xen/xen-bus.c | 4 ++++
> include/hw/xen/xen-backend.h | 2 --
> include/hw/xen/xen-bus.h | 2 ++
> 6 files changed, 9 insertions(+), 6 deletions(-)
>
Actually, I think you should probably update
xen_backend_try_device_destroy() in this patch too. It currently uses
xen_backend_list_find() to check whether the specified XenDevice has an
associated XenBackendInstance.
Paul
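A rough sketch of the direction suggested here, assuming
xen_backend_try_device_destroy() keeps its current shape and simply swaps the
list lookup for the new pointer (illustrative only, not the actual follow-up
change referenced later in the thread):

/* Sketch: use the new xendev->backend pointer instead of searching the
 * backend list when deciding whether a backend instance exists. */
bool xen_backend_try_device_destroy(XenDevice *xendev, Error **errp)
{
    XenBackendInstance *backend = xendev->backend; /* was xen_backend_list_find() */

    if (!backend) {
        /* Not created through a XenBackendInstance; nothing to destroy. */
        return false;
    }

    /*
     * ... invoke the registered destroy hook and free the instance,
     * as the existing implementation presumably continues to do ...
     */
    return true;
}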
* Re: [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-22 17:05 ` Paul Durrant
@ 2023-11-22 22:44 ` Woodhouse, David via
2023-11-22 23:49 ` Volodymyr Babchuk
2023-11-22 22:56 ` Volodymyr Babchuk
1 sibling, 1 reply; 52+ messages in thread
From: Woodhouse, David via @ 2023-11-22 22:44 UTC (permalink / raw)
To: Volodymyr_Babchuk@epam.com, qemu-devel@nongnu.org, paul@xen.org
Cc: julien@xen.org, hreitz@redhat.com, anthony.perard@citrix.com,
marcandre.lureau@redhat.com, sstabellini@kernel.org,
qemu-block@nongnu.org, kwolf@redhat.com,
xen-devel@lists.xenproject.org, pbonzini@redhat.com,
jasowang@redhat.com
On Wed, 2023-11-22 at 17:05 +0000, Paul Durrant wrote:
> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > From: David Woodhouse <dwmw@amazon.co.uk>
> >
> > This allows a XenDevice implementation to know whether it was created
> > by QEMU, or merely discovered in XenStore after the toolstack created
> > it. This will allow us to create frontend/backend nodes only when we
> > should, rather than unconditionally attempting to overwrite them from
> > a driver domain which doesn't have privileges to do so.
> >
> > As an added benefit, it also means we no longer have to call the
> > xen_backend_set_device() function from the device models immediately
> > after calling qdev_realize_and_unref(). Even though we could make
> > the argument that it's safe to do so, and the pointer to the unreffed
> > device *will* actually still be valid, it still made my skin itch to
> > look at it.
> >
> > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > ---
> > hw/block/xen-block.c | 3 +--
> > hw/char/xen_console.c | 2 +-
> > hw/net/xen_nic.c | 2 +-
> > hw/xen/xen-bus.c | 4 ++++
> > include/hw/xen/xen-backend.h | 2 --
> > include/hw/xen/xen-bus.h | 2 ++
> > 6 files changed, 9 insertions(+), 6 deletions(-)
> >
>
> Actually, I think you should probably update
> xen_backend_try_device_destroy() in this patch too. It currently uses
> xen_backend_list_find() to check whether the specified XenDevice has an
> associated XenBackendInstance.
I think I did that in
https://git.infradead.org/users/dwmw2/qemu.git/commitdiff/94f1b474478ce0ede
but didn't get round to sending it out again because of travel.
* Re: [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-22 22:44 ` Woodhouse, David via
@ 2023-11-22 23:49 ` Volodymyr Babchuk
2023-11-22 23:55 ` Woodhouse, David via
0 siblings, 1 reply; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-22 23:49 UTC (permalink / raw)
To: Woodhouse, David
Cc: qemu-devel@nongnu.org, paul@xen.org, julien@xen.org,
hreitz@redhat.com, anthony.perard@citrix.com,
marcandre.lureau@redhat.com, sstabellini@kernel.org,
qemu-block@nongnu.org, kwolf@redhat.com,
xen-devel@lists.xenproject.org, pbonzini@redhat.com,
jasowang@redhat.com
Hi David,
"Woodhouse, David" <dwmw@amazon.co.uk> writes:
> On Wed, 2023-11-22 at 17:05 +0000, Paul Durrant wrote:
>> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>> > From: David Woodhouse <dwmw@amazon.co.uk>
>> >
>> > This allows a XenDevice implementation to know whether it was created
>> > by QEMU, or merely discovered in XenStore after the toolstack created
>> > it. This will allow us to create frontend/backend nodes only when we
>> > should, rather than unconditionally attempting to overwrite them from
>> > a driver domain which doesn't have privileges to do so.
>> >
>> > As an added benefit, it also means we no longer have to call the
>> > xen_backend_set_device() function from the device models immediately
>> > after calling qdev_realize_and_unref(). Even though we could make
>> > the argument that it's safe to do so, and the pointer to the unreffed
>> > device *will* actually still be valid, it still made my skin itch to
>> > look at it.
>> >
>> > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
>> > ---
>> > hw/block/xen-block.c | 3 +--
>> > hw/char/xen_console.c | 2 +-
>> > hw/net/xen_nic.c | 2 +-
>> > hw/xen/xen-bus.c | 4 ++++
>> > include/hw/xen/xen-backend.h | 2 --
>> > include/hw/xen/xen-bus.h | 2 ++
>> > 6 files changed, 9 insertions(+), 6 deletions(-)
>> >
>>
>> Actually, I think you should probably update
>> xen_backend_try_device_destroy() in this patch too. It currently uses
>> xen_backend_list_find() to check whether the specified XenDevice has an
>> associated XenBackendInstance.
>
> I think I did that in
> https://git.infradead.org/users/dwmw2/qemu.git/commitdiff/94f1b474478ce0ede
> but didn't get round to sending it out again because of travel.
I can just pull it from this link, if you don't mind.
--
WBR, Volodymyr
* Re: [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-22 23:49 ` Volodymyr Babchuk
@ 2023-11-22 23:55 ` Woodhouse, David via
0 siblings, 0 replies; 52+ messages in thread
From: Woodhouse, David via @ 2023-11-22 23:55 UTC (permalink / raw)
To: Volodymyr_Babchuk@epam.com
Cc: julien@xen.org, hreitz@redhat.com, anthony.perard@citrix.com,
qemu-devel@nongnu.org, marcandre.lureau@redhat.com,
sstabellini@kernel.org, qemu-block@nongnu.org, paul@xen.org,
kwolf@redhat.com, xen-devel@lists.xenproject.org,
pbonzini@redhat.com, jasowang@redhat.com
On Wed, 2023-11-22 at 23:49 +0000, Volodymyr Babchuk wrote:
>
> I can just pull it from this link, if you don't mind.
Please do; thank you!
* Re: [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-22 17:05 ` Paul Durrant
2023-11-22 22:44 ` Woodhouse, David via
@ 2023-11-22 22:56 ` Volodymyr Babchuk
2023-11-22 23:04 ` David Woodhouse
1 sibling, 1 reply; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-22 22:56 UTC (permalink / raw)
To: paul@xen.org
Cc: qemu-devel@nongnu.org, David Woodhouse, Stefano Stabellini,
Julien Grall, Anthony Perard, Kevin Wolf, Hanna Reitz,
Marc-André Lureau, Paolo Bonzini, Jason Wang,
open list:Block layer core, xen-devel@lists.xenproject.org
Paul Durrant <xadimgnik@gmail.com> writes:
> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>> From: David Woodhouse <dwmw@amazon.co.uk>
>> This allows a XenDevice implementation to know whether it was
>> created
>> by QEMU, or merely discovered in XenStore after the toolstack created
>> it. This will allow us to create frontend/backend nodes only when we
>> should, rather than unconditionally attempting to overwrite them from
>> a driver domain which doesn't have privileges to do so.
>> As an added benefit, it also means we no longer have to call the
>> xen_backend_set_device() function from the device models immediately
>> after calling qdev_realize_and_unref(). Even though we could make
>> the argument that it's safe to do so, and the pointer to the unreffed
>> device *will* actually still be valid, it still made my skin itch to
>> look at it.
>> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
>> ---
>> hw/block/xen-block.c | 3 +--
>> hw/char/xen_console.c | 2 +-
>> hw/net/xen_nic.c | 2 +-
>> hw/xen/xen-bus.c | 4 ++++
>> include/hw/xen/xen-backend.h | 2 --
>> include/hw/xen/xen-bus.h | 2 ++
>> 6 files changed, 9 insertions(+), 6 deletions(-)
>>
>
> Actually, I think you should probably update
> xen_backend_try_device_destroy() in this patch too. It currently uses
> xen_backend_list_find() to check whether the specified XenDevice has
> an associated XenBackendInstance.
Sure. Looks like it is the only user of xen_backend_list_find(), so we
can get rid of it as well.
I'll drop your R-b tag, because of those additional changes in the new
version.
--
WBR, Volodymyr
* Re: [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-22 22:56 ` Volodymyr Babchuk
@ 2023-11-22 23:04 ` David Woodhouse
2023-11-23 9:29 ` Paul Durrant
0 siblings, 1 reply; 52+ messages in thread
From: David Woodhouse @ 2023-11-22 23:04 UTC (permalink / raw)
To: Volodymyr Babchuk, paul@xen.org
Cc: qemu-devel@nongnu.org, Stefano Stabellini, Julien Grall,
Anthony Perard, Kevin Wolf, Hanna Reitz, Marc-André Lureau,
Paolo Bonzini, Jason Wang, open list:Block layer core,
xen-devel@lists.xenproject.org
On Wed, 2023-11-22 at 22:56 +0000, Volodymyr Babchuk wrote:
>
>
> Paul Durrant <xadimgnik@gmail.com> writes:
>
> > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > > From: David Woodhouse <dwmw@amazon.co.uk>
> > > This allows a XenDevice implementation to know whether it was
> > > created
> > > by QEMU, or merely discovered in XenStore after the toolstack created
> > > it. This will allow us to create frontend/backend nodes only when we
> > > should, rather than unconditionally attempting to overwrite them from
> > > a driver domain which doesn't have privileges to do so.
> > > As an added benefit, it also means we no longer have to call the
> > > xen_backend_set_device() function from the device models immediately
> > > after calling qdev_realize_and_unref(). Even though we could make
> > > the argument that it's safe to do so, and the pointer to the unreffed
> > > device *will* actually still be valid, it still made my skin itch to
> > > look at it.
> > > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > > ---
> > > hw/block/xen-block.c | 3 +--
> > > hw/char/xen_console.c | 2 +-
> > > hw/net/xen_nic.c | 2 +-
> > > hw/xen/xen-bus.c | 4 ++++
> > > include/hw/xen/xen-backend.h | 2 --
> > > include/hw/xen/xen-bus.h | 2 ++
> > > 6 files changed, 9 insertions(+), 6 deletions(-)
> > >
> >
> > Actually, I think you should probably update
> > xen_backend_try_device_destroy() in this patch too. It currently uses
> > xen_backend_list_find() to check whether the specified XenDevice has
> > an associated XenBackendInstance.
>
> Sure. Looks like it is the only user of xen_backend_list_find(), so we
> can get rid of it as well.
>
> I'll drop your R-b tag, because of those additional changes in the new
> version.
I think it's fine to keep. He told me to do it :)
* Re: [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it
2023-11-22 23:04 ` David Woodhouse
@ 2023-11-23 9:29 ` Paul Durrant
0 siblings, 0 replies; 52+ messages in thread
From: Paul Durrant @ 2023-11-23 9:29 UTC (permalink / raw)
To: David Woodhouse, Volodymyr Babchuk
Cc: qemu-devel@nongnu.org, Stefano Stabellini, Julien Grall,
Anthony Perard, Kevin Wolf, Hanna Reitz, Marc-André Lureau,
Paolo Bonzini, Jason Wang, open list:Block layer core,
xen-devel@lists.xenproject.org
On 22/11/2023 23:04, David Woodhouse wrote:
> On Wed, 2023-11-22 at 22:56 +0000, Volodymyr Babchuk wrote:
>>
>>
>> Paul Durrant <xadimgnik@gmail.com> writes:
>>
>>> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>>> From: David Woodhouse <dwmw@amazon.co.uk>
>>>> This allows a XenDevice implementation to know whether it was
>>>> created
>>>> by QEMU, or merely discovered in XenStore after the toolstack created
>>>> it. This will allow us to create frontend/backend nodes only when we
>>>> should, rather than unconditionally attempting to overwrite them from
>>>> a driver domain which doesn't have privileges to do so.
>>>> As an added benefit, it also means we no longer have to call the
>>>> xen_backend_set_device() function from the device models immediately
>>>> after calling qdev_realize_and_unref(). Even though we could make
>>>> the argument that it's safe to do so, and the pointer to the unreffed
>>>> device *will* actually still be valid, it still made my skin itch to
>>>> look at it.
>>>> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
>>>> ---
>>>> hw/block/xen-block.c | 3 +--
>>>> hw/char/xen_console.c | 2 +-
>>>> hw/net/xen_nic.c | 2 +-
>>>> hw/xen/xen-bus.c | 4 ++++
>>>> include/hw/xen/xen-backend.h | 2 --
>>>> include/hw/xen/xen-bus.h | 2 ++
>>>> 6 files changed, 9 insertions(+), 6 deletions(-)
>>>>
>>>
>>> Actually, I think you should probably update
>>> xen_backend_try_device_destroy() in this patch too. It currently uses
>>> xen_backend_list_find() to check whether the specified XenDevice has
>>> an associated XenBackendInstance.
>>
>> Sure. Looks like it is the only user of xen_backend_list_find(), so we
>> can get rid of it as well.
>>
>> I'll drop your R-b tag, because of those additional changes in the new
>> version.
>
> I think it's fine to keep. He told me to do it :)
I confirm that :-)
* [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner
2023-11-21 22:10 [PATCH v2 0/6] xen-arm: add support for virtio-pci Volodymyr Babchuk
2023-11-21 22:10 ` [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it Volodymyr Babchuk
@ 2023-11-21 22:10 ` Volodymyr Babchuk
2023-11-22 17:07 ` Paul Durrant
2023-11-22 23:01 ` David Woodhouse
2023-11-21 22:10 ` [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device Volodymyr Babchuk
` (3 subsequent siblings)
5 siblings, 2 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-21 22:10 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
Volodymyr Babchuk, David Woodhouse, Paul Durrant,
Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Anthony Perard,
open list:X86 Xen CPUs
Add option to preserve owner when creating an entry in Xen Store. This
may be needed in cases when Qemu is working as device model in a
domain that is Domain-0, e.g. in driver domain.
"owner" parameter for qemu_xen_xs_create() function can have special
value XS_PRESERVE_OWNER, which will make specific implementation to
get original owner of an entry and pass it back to
set_permissions() call.
Please note, that XenStore inherits permissions, so even if entry is
newly created by, it already has the owner set to match owner of entry
at previous level.
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
--
In v2:
- Pass transaction to xs_get_permissions() in libxenstore_create()
- Added comment before XS_PRESERVE_OWNER definition
- Extended the commit message
---
hw/i386/kvm/xen_xenstore.c | 18 ++++++++++++++++++
hw/xen/xen-operations.c | 12 ++++++++++++
include/hw/xen/xen_backend_ops.h | 7 +++++++
3 files changed, 37 insertions(+)
diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 6e651960b3..d0fd5d4681 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -1595,6 +1595,24 @@ static bool xs_be_create(struct qemu_xs_handle *h, xs_transaction_t t,
return false;
}
+ if (owner == XS_PRESERVE_OWNER) {
+ GList *prev_perms;
+ char letter;
+
+ err = xs_impl_get_perms(h->impl, 0, t, path, &prev_perms);
+ if (err) {
+ errno = err;
+ return false;
+ }
+
+ if (sscanf(prev_perms->data, "%c%u", &letter, &owner) != 2) {
+ errno = EFAULT;
+ g_list_free_full(prev_perms, g_free);
+ return false;
+ }
+ g_list_free_full(prev_perms, g_free);
+ }
+
perms_list = g_list_append(perms_list,
xs_perm_as_string(XS_PERM_NONE, owner));
perms_list = g_list_append(perms_list,
diff --git a/hw/xen/xen-operations.c b/hw/xen/xen-operations.c
index e00983ec44..ae8265635f 100644
--- a/hw/xen/xen-operations.c
+++ b/hw/xen/xen-operations.c
@@ -300,6 +300,18 @@ static bool libxenstore_create(struct qemu_xs_handle *h, xs_transaction_t t,
return false;
}
+ if (owner == XS_PRESERVE_OWNER) {
+ struct xs_permissions *tmp;
+ unsigned int num;
+
+ tmp = xs_get_permissions(h->xsh, t, path, &num);
+ if (tmp == NULL) {
+ return false;
+ }
+ perms_list[0].id = tmp[0].id;
+ free(tmp);
+ }
+
return xs_set_permissions(h->xsh, t, path, perms_list,
ARRAY_SIZE(perms_list));
}
diff --git a/include/hw/xen/xen_backend_ops.h b/include/hw/xen/xen_backend_ops.h
index 90cca85f52..79021538a3 100644
--- a/include/hw/xen/xen_backend_ops.h
+++ b/include/hw/xen/xen_backend_ops.h
@@ -266,6 +266,13 @@ typedef uint32_t xs_transaction_t;
#define XS_PERM_READ 0x01
#define XS_PERM_WRITE 0x02
+/*
+ * This is QEMU-specific special value used only by QEMU wrappers
+ * around XenStore. It can be passed to qemu_xen_xs_create() to
+ * inherit owner value from higher-level XS entry.
+ */
+#define XS_PRESERVE_OWNER 0xFFFE
+
struct xenstore_backend_ops {
struct qemu_xs_handle *(*open)(void);
void (*close)(struct qemu_xs_handle *h);
--
2.42.0
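The "%c%u" parse in the xs_be_create() hunk above relies on the textual
permission encoding used by XenStore: a single letter for the access type
followed by the domain ID, with the first entry in the list naming the node's
owner. A minimal sketch of that decoding step, assuming this encoding
('n' = none, 'r' = read, 'w' = write, 'b' = both); parse_owner() is a made-up
helper for illustration:

#include <stdio.h>

/* Sketch: extract the owner domid from the first permission entry,
 * e.g. "n7" means "domid 7 owns the node, others have no access". */
static unsigned int parse_owner(const char *perm)
{
    char letter;          /* 'n', 'r', 'w' or 'b' */
    unsigned int domid;

    if (sscanf(perm, "%c%u", &letter, &domid) != 2) {
        return (unsigned int)-1;  /* malformed entry */
    }
    return domid;
}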
* Re: [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner
2023-11-21 22:10 ` [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner Volodymyr Babchuk
@ 2023-11-22 17:07 ` Paul Durrant
2023-11-22 22:28 ` Stefano Stabellini
2023-11-22 23:01 ` David Woodhouse
1 sibling, 1 reply; 52+ messages in thread
From: Paul Durrant @ 2023-11-22 17:07 UTC (permalink / raw)
To: Volodymyr Babchuk, qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
David Woodhouse, Michael S. Tsirkin, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost, Anthony Perard,
open list:X86 Xen CPUs
On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> Add option to preserve owner when creating an entry in Xen Store. This
> may be needed in cases when Qemu is working as device model in a
*may* be needed?
I don't understand why this patch is necessary and the commit comment
isn't really helping me.
> domain that is Domain-0, e.g. in driver domain.
>
> "owner" parameter for qemu_xen_xs_create() function can have special
> value XS_PRESERVE_OWNER, which will make specific implementation to
> get original owner of an entry and pass it back to
> set_permissions() call.
>
> Please note, that XenStore inherits permissions, so even if entry is
> newly created by, it already has the owner set to match owner of entry
> at previous level.
>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
* Re: [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner
2023-11-22 17:07 ` Paul Durrant
@ 2023-11-22 22:28 ` Stefano Stabellini
0 siblings, 0 replies; 52+ messages in thread
From: Stefano Stabellini @ 2023-11-22 22:28 UTC (permalink / raw)
To: paul
Cc: Volodymyr Babchuk, qemu-devel@nongnu.org, David Woodhouse,
Stefano Stabellini, Julien Grall, David Woodhouse,
Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Anthony Perard,
open list:X86 Xen CPUs
On Wed, 22 Nov 2023, Paul Durrant wrote:
> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > Add option to preserve owner when creating an entry in Xen Store. This
> > may be needed in cases when Qemu is working as device model in a
>
> *may* be needed?
>
> I don't undertstand why this patch is necessary and the commit comment isn't
> really helping me.
>
> > domain that is Domain-0, e.g. in driver domain.
I think Volodymyr meant:
domain that is *not* Domain-0
So I am guessing this patch is needed, otherwise QEMU will run into
permission errors doing xenstore operations
> > "owner" parameter for qemu_xen_xs_create() function can have special
> > value XS_PRESERVE_OWNER, which will make specific implementation to
> > get original owner of an entry and pass it back to
> > set_permissions() call.
> >
> > Please note, that XenStore inherits permissions, so even if entry is
> > newly created by, it already has the owner set to match owner of entry
> > at previous level.
> >
> > Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>
* Re: [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner
2023-11-21 22:10 ` [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner Volodymyr Babchuk
2023-11-22 17:07 ` Paul Durrant
@ 2023-11-22 23:01 ` David Woodhouse
2023-11-22 23:03 ` Volodymyr Babchuk
2023-11-23 1:19 ` Volodymyr Babchuk
1 sibling, 2 replies; 52+ messages in thread
From: David Woodhouse @ 2023-11-22 23:01 UTC (permalink / raw)
To: Volodymyr Babchuk, qemu-devel@nongnu.org
Cc: Stefano Stabellini, Julien Grall, Paul Durrant,
Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Anthony Perard,
open list:X86 Xen CPUs
On Tue, 2023-11-21 at 22:10 +0000, Volodymyr Babchuk wrote:
>
> --- a/hw/xen/xen-operations.c
> +++ b/hw/xen/xen-operations.c
> @@ -300,6 +300,18 @@ static bool libxenstore_create(struct qemu_xs_handle *h, xs_transaction_t t,
> return false;
> }
>
> + if (owner == XS_PRESERVE_OWNER) {
> + struct xs_permissions *tmp;
> + unsigned int num;
> +
> + tmp = xs_get_permissions(h->xsh, t, path, &num);
> + if (tmp == NULL) {
> + return false;
> + }
> + perms_list[0].id = tmp[0].id;
> + free(tmp);
> + }
> +
> return xs_set_permissions(h->xsh, t, path, perms_list,
> ARRAY_SIZE(perms_list));
> }
If the existing transaction is XBT_NULL I think you want to create a
new transaction for it, don't you?
* Re: [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner
2023-11-22 23:01 ` David Woodhouse
@ 2023-11-22 23:03 ` Volodymyr Babchuk
2023-11-23 1:19 ` Volodymyr Babchuk
1 sibling, 0 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-22 23:03 UTC (permalink / raw)
To: David Woodhouse
Cc: qemu-devel@nongnu.org, Stefano Stabellini, Julien Grall,
Paul Durrant, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Anthony Perard,
open list:X86 Xen CPUs
Hi David,
David Woodhouse <dwmw2@infradead.org> writes:
> On Tue, 2023-11-21 at 22:10 +0000, Volodymyr Babchuk wrote:
>>
>> --- a/hw/xen/xen-operations.c
>> +++ b/hw/xen/xen-operations.c
>> @@ -300,6 +300,18 @@ static bool libxenstore_create(struct qemu_xs_handle *h, xs_transaction_t t,
>> return false;
>> }
>>
>> + if (owner == XS_PRESERVE_OWNER) {
>> + struct xs_permissions *tmp;
>> + unsigned int num;
>> +
>> + tmp = xs_get_permissions(h->xsh, t, path, &num);
>> + if (tmp == NULL) {
>> + return false;
>> + }
>> + perms_list[0].id = tmp[0].id;
>> + free(tmp);
>> + }
>> +
>> return xs_set_permissions(h->xsh, t, path, perms_list,
>> ARRAY_SIZE(perms_list));
>> }
>
> If the existing transaction is XBT_NULL I think you want to create a
> new transaction for it, don't you?
As per Stefano's and Paul's comments I'll drop this patch
completely. Thanks for the review, though.
--
WBR, Volodymyr
* Re: [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner
2023-11-22 23:01 ` David Woodhouse
2023-11-22 23:03 ` Volodymyr Babchuk
@ 2023-11-23 1:19 ` Volodymyr Babchuk
1 sibling, 0 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-23 1:19 UTC (permalink / raw)
To: David Woodhouse
Cc: qemu-devel@nongnu.org, Stefano Stabellini, Julien Grall,
Paul Durrant, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Anthony Perard,
open list:X86 Xen CPUs
Hi David,
David Woodhouse <dwmw2@infradead.org> writes:
> On Tue, 2023-11-21 at 22:10 +0000, Volodymyr Babchuk wrote:
>>
>> --- a/hw/xen/xen-operations.c
>> +++ b/hw/xen/xen-operations.c
>> @@ -300,6 +300,18 @@ static bool libxenstore_create(struct qemu_xs_handle *h, xs_transaction_t t,
>> return false;
>> }
>>
>> + if (owner == XS_PRESERVE_OWNER) {
>> + struct xs_permissions *tmp;
>> + unsigned int num;
>> +
>> + tmp = xs_get_permissions(h->xsh, t, path, &num);
>> + if (tmp == NULL) {
>> + return false;
>> + }
>> + perms_list[0].id = tmp[0].id;
>> + free(tmp);
>> + }
>> +
>> return xs_set_permissions(h->xsh, t, path, perms_list,
>> ARRAY_SIZE(perms_list));
>> }
>
> If the existing transaction is XBT_NULL I think you want to create a
> new transaction for it, don't you?
I must say that your comment is valid even without my changes.
xenstore_mkdir() calls qemu_xen_xs_create(), providing just a plain
"0" (not even XBT_NULL) as the transaction, but the actual xenstore
interface implementations, like xs_be_create(), issue multiple calls to
the lower layer, passing "t" downwards. For example, xs_be_create()
calls xs_impl_read(), xs_impl_write() and xs_impl_set_perms(). If
called from xenstore_mkdir(), those three operations will be
non-atomic. I don't know if this can lead to a problem, because
apparently it has been like this for a long time...
--
WBR, Volodymyr
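For what it's worth, a minimal sketch of how the libxenstore path could make
the read-owner/set-permissions pair atomic when the caller passes XBT_NULL,
as David suggests. It uses plain libxenstore calls from <xenstore.h>;
set_perms_preserving_owner() is a made-up helper, and retry-on-EAGAIN plus
the QEMU wrapper plumbing are left out:

#include <stdbool.h>
#include <stdlib.h>
#include <xenstore.h>

/* Open a local transaction if the caller did not provide one, so that
 * reading the current owner and rewriting the permissions happen
 * atomically with respect to other XenStore writers. */
static bool set_perms_preserving_owner(struct xs_handle *xsh,
                                       xs_transaction_t t, const char *path,
                                       struct xs_permissions *perms_list,
                                       unsigned int nperms)
{
    bool local = (t == XBT_NULL);
    struct xs_permissions *tmp;
    unsigned int num;
    bool ok = false;

    if (local) {
        t = xs_transaction_start(xsh);
        if (t == XBT_NULL) {
            return false;
        }
    }

    tmp = xs_get_permissions(xsh, t, path, &num);
    if (tmp) {
        perms_list[0].id = tmp[0].id;   /* preserve the existing owner */
        free(tmp);
        ok = xs_set_permissions(xsh, t, path, perms_list, nperms);
    }

    if (local) {
        /* Commit on success, abort on failure; a real version would retry
         * when the commit fails with EAGAIN. */
        ok = xs_transaction_end(xsh, t, !ok) && ok;
    }
    return ok;
}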
* [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device...
2023-11-21 22:10 [PATCH v2 0/6] xen-arm: add support for virtio-pci Volodymyr Babchuk
2023-11-21 22:10 ` [PATCH v2 1/6] hw/xen: Set XenBackendInstance in the XenDevice before realizing it Volodymyr Babchuk
2023-11-21 22:10 ` [PATCH v2 3/6] xen: xenstore: add possibility to preserve owner Volodymyr Babchuk
@ 2023-11-21 22:10 ` Volodymyr Babchuk
2023-11-22 11:07 ` Philippe Mathieu-Daudé
2023-11-22 17:03 ` Paul Durrant
2023-11-21 22:10 ` [PATCH v2 5/6] xen_arm: Set mc->max_cpus to GUEST_MAX_VCPUS in xen_arm_init() Volodymyr Babchuk
` (2 subsequent siblings)
5 siblings, 2 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-21 22:10 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
Volodymyr Babchuk, Paul Durrant, Oleksandr Tyshchenko,
Anthony Perard, Paul Durrant, Kevin Wolf, Hanna Reitz,
Marc-André Lureau, Paolo Bonzini, Jason Wang,
open list:X86 Xen CPUs, open list:Block layer core
was created by QEMU
Xen PV devices in QEMU can be created in two ways: either by QEMU
itself, if they were passed via command line, or by Xen toolstack. In
the latter case, QEMU scans XenStore entries and configures devices
accordingly.
In the second case we don't want QEMU to write/delete front-end
entries for two reasons: it might have no access to those entries if
it is running in un-privileged domain and it is just incorrect to
overwrite entries already provided by Xen toolstack, because toolstack
manages those nodes. For example, it might read backend- or frontend-
state to be sure that they are both disconnected and it is safe to
destroy a domain.
This patch checks presence of xendev->backend to check if Xen PV
device is acting as a backend (i.e. it was configured by Xen
toolstack) to decide if it should touch frontend entries in XenStore.
Also, we should remove XenStore entries during device teardown only if
they weren't created by the Xen toolstack. If they were created by the
toolstack, then it is the toolstack's job to do the proper clean-up.
Suggested-by: Paul Durrant <xadimgnik@gmail.com>
Suggested-by: David Woodhouse <dwmw@amazon.co.uk>
Co-Authored-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
hw/block/xen-block.c | 16 +++++++++-------
hw/char/xen_console.c | 2 +-
hw/net/xen_nic.c | 18 ++++++++++--------
hw/xen/xen-bus.c | 14 +++++++++-----
4 files changed, 29 insertions(+), 21 deletions(-)
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index c2ac9db4a2..dac519a6d3 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -390,13 +390,15 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
xen_device_backend_printf(xendev, "info", "%u", blockdev->info);
- xen_device_frontend_printf(xendev, "virtual-device", "%lu",
- vdev->number);
- xen_device_frontend_printf(xendev, "device-type", "%s",
- blockdev->device_type);
-
- xen_device_backend_printf(xendev, "sector-size", "%u",
- conf->logical_block_size);
+ if (!xendev->backend) {
+ xen_device_frontend_printf(xendev, "virtual-device", "%lu",
+ vdev->number);
+ xen_device_frontend_printf(xendev, "device-type", "%s",
+ blockdev->device_type);
+
+ xen_device_backend_printf(xendev, "sector-size", "%u",
+ conf->logical_block_size);
+ }
xen_block_set_size(blockdev);
diff --git a/hw/char/xen_console.c b/hw/char/xen_console.c
index bef8a3a621..b52ddddabf 100644
--- a/hw/char/xen_console.c
+++ b/hw/char/xen_console.c
@@ -450,7 +450,7 @@ static void xen_console_realize(XenDevice *xendev, Error **errp)
trace_xen_console_realize(con->dev, object_get_typename(OBJECT(cs)));
- if (CHARDEV_IS_PTY(cs)) {
+ if (CHARDEV_IS_PTY(cs) && !xendev->backend) {
/* Strip the leading 'pty:' */
xen_device_frontend_printf(xendev, "tty", "%s", cs->filename + 4);
}
diff --git a/hw/net/xen_nic.c b/hw/net/xen_nic.c
index afa10c96e8..27442bef38 100644
--- a/hw/net/xen_nic.c
+++ b/hw/net/xen_nic.c
@@ -315,14 +315,16 @@ static void xen_netdev_realize(XenDevice *xendev, Error **errp)
qemu_macaddr_default_if_unset(&netdev->conf.macaddr);
- xen_device_frontend_printf(xendev, "mac", "%02x:%02x:%02x:%02x:%02x:%02x",
- netdev->conf.macaddr.a[0],
- netdev->conf.macaddr.a[1],
- netdev->conf.macaddr.a[2],
- netdev->conf.macaddr.a[3],
- netdev->conf.macaddr.a[4],
- netdev->conf.macaddr.a[5]);
-
+ if (!xendev->backend) {
+ xen_device_frontend_printf(xendev, "mac",
+ "%02x:%02x:%02x:%02x:%02x:%02x",
+ netdev->conf.macaddr.a[0],
+ netdev->conf.macaddr.a[1],
+ netdev->conf.macaddr.a[2],
+ netdev->conf.macaddr.a[3],
+ netdev->conf.macaddr.a[4],
+ netdev->conf.macaddr.a[5]);
+ }
netdev->nic = qemu_new_nic(&net_xen_info, &netdev->conf,
object_get_typename(OBJECT(xendev)),
DEVICE(xendev)->id, netdev);
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index dd0171ab98..d0f17aeb27 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -599,8 +599,10 @@ static void xen_device_backend_destroy(XenDevice *xendev)
g_assert(xenbus->xsh);
- xs_node_destroy(xenbus->xsh, XBT_NULL, xendev->backend_path,
- &local_err);
+ if (!xendev->backend) {
+ xs_node_destroy(xenbus->xsh, XBT_NULL, xendev->backend_path,
+ &local_err);
+ }
g_free(xendev->backend_path);
xendev->backend_path = NULL;
@@ -764,8 +766,10 @@ static void xen_device_frontend_destroy(XenDevice *xendev)
g_assert(xenbus->xsh);
- xs_node_destroy(xenbus->xsh, XBT_NULL, xendev->frontend_path,
- &local_err);
+ if (!xendev->backend) {
+ xs_node_destroy(xenbus->xsh, XBT_NULL, xendev->frontend_path,
+ &local_err);
+ }
g_free(xendev->frontend_path);
xendev->frontend_path = NULL;
@@ -1063,7 +1067,7 @@ static void xen_device_realize(DeviceState *dev, Error **errp)
xen_device_backend_set_online(xendev, true);
xen_device_backend_set_state(xendev, XenbusStateInitWait);
- if (!xen_device_frontend_exists(xendev)) {
+ if (!xen_device_frontend_exists(xendev) && !xendev->backend) {
xen_device_frontend_printf(xendev, "backend", "%s",
xendev->backend_path);
xen_device_frontend_printf(xendev, "backend-id", "%u",
--
2.42.0
* Re: [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device...
2023-11-21 22:10 ` [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device Volodymyr Babchuk
@ 2023-11-22 11:07 ` Philippe Mathieu-Daudé
2023-11-22 22:49 ` Volodymyr Babchuk
2023-11-22 17:03 ` Paul Durrant
1 sibling, 1 reply; 52+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-11-22 11:07 UTC (permalink / raw)
To: Volodymyr Babchuk, qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall, Paul Durrant,
Oleksandr Tyshchenko, Anthony Perard, Paul Durrant, Kevin Wolf,
Hanna Reitz, Marc-André Lureau, Paolo Bonzini, Jason Wang,
open list:X86 Xen CPUs, open list:Block layer core
Hi Volodymyr,
On 21/11/23 23:10, Volodymyr Babchuk wrote:
> was created by QEMU
Please do not split lines between subject and content. Rewrite the
full line. Preferably restrict the subject to 72 chars. Otherwise
your patch isn't displayed correctly in git tools.
Thanks,
Phil.
> Xen PV devices in QEMU can be created in two ways: either by QEMU
> itself, if they were passed via command line, or by Xen toolstack. In
> the latter case, QEMU scans XenStore entries and configures devices
> accordingly.
>
> In the second case we don't want QEMU to write/delete front-end
> entries for two reasons: it might have no access to those entries if
> it is running in un-privileged domain and it is just incorrect to
> overwrite entries already provided by Xen toolstack, because toolstack
> manages those nodes. For example, it might read backend- or frontend-
> state to be sure that they are both disconnected and it is safe to
> destroy a domain.
>
> This patch checks presence of xendev->backend to check if Xen PV
> device is acting as a backend (i.e. it was configured by Xen
> toolstack) to decide if it should touch frontend entries in XenStore.
> Also, when we need to remove XenStore entries during device teardown
> only if they weren't created by Xen toolstack. If they were created by
> toolstack, then it is toolstack's job to do proper clean-up.
>
> Suggested-by: Paul Durrant <xadimgnik@gmail.com>
> Suggested-by: David Woodhouse <dwmw@amazon.co.uk>
> Co-Authored-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
> hw/block/xen-block.c | 16 +++++++++-------
> hw/char/xen_console.c | 2 +-
> hw/net/xen_nic.c | 18 ++++++++++--------
> hw/xen/xen-bus.c | 14 +++++++++-----
> 4 files changed, 29 insertions(+), 21 deletions(-)
* Re: [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device...
2023-11-22 11:07 ` Philippe Mathieu-Daudé
@ 2023-11-22 22:49 ` Volodymyr Babchuk
2023-11-22 23:19 ` David Woodhouse
0 siblings, 1 reply; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-22 22:49 UTC (permalink / raw)
To: Philippe Mathieu-Daudé
Cc: qemu-devel@nongnu.org, David Woodhouse, Stefano Stabellini,
Julien Grall, Paul Durrant, Oleksandr Tyshchenko, Anthony Perard,
Paul Durrant, Kevin Wolf, Hanna Reitz, Marc-André Lureau,
Paolo Bonzini, Jason Wang, open list:Block layer core,
xen-devel@lists.xenproject.org
Hi Philippe,
Philippe Mathieu-Daudé <philmd@linaro.org> writes:
> Hi Volodymyr,
>
> On 21/11/23 23:10, Volodymyr Babchuk wrote:
>> was created by QEMU
>
> Please do not split lines between subject and content. Rewrite the
> full line. Preferably restrict the subject to 72 chars.
I tried to come up with a shorter description, but failed. I'll rewrite it in
the next version.
--
WBR, Volodymyr
* Re: [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device...
2023-11-22 22:49 ` Volodymyr Babchuk
@ 2023-11-22 23:19 ` David Woodhouse
0 siblings, 0 replies; 52+ messages in thread
From: David Woodhouse @ 2023-11-22 23:19 UTC (permalink / raw)
To: Volodymyr Babchuk, Philippe Mathieu-Daudé
Cc: qemu-devel@nongnu.org, Stefano Stabellini, Julien Grall,
Paul Durrant, Oleksandr Tyshchenko, Anthony Perard, Paul Durrant,
Kevin Wolf, Hanna Reitz, Marc-André Lureau, Paolo Bonzini,
Jason Wang, open list:Block layer core,
xen-devel@lists.xenproject.org
On Wed, 2023-11-22 at 22:49 +0000, Volodymyr Babchuk wrote:
>
> > On 21/11/23 23:10, Volodymyr Babchuk wrote:
> > > was created by QEMU
> >
> > Please do not split lines between subject and content. Rewrite the
> > full line. Preferably restrict the subject to 72 chars.
>
> I tried to come with shorter description, but failed. I'll rewrite it in
> the next version.
Even if you just put those last four words *into* the subject, it's
only 75 characters once the leading [PATCH...] is stripped.
xen: backends: touch some XenStore nodes only if device was created by QEMU
And this would be 9 characters fewer:
xen: backends: don't overwrite XenStore nodes created by toolstack
* Re: [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device...
2023-11-21 22:10 ` [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device Volodymyr Babchuk
2023-11-22 11:07 ` Philippe Mathieu-Daudé
@ 2023-11-22 17:03 ` Paul Durrant
2023-11-22 22:46 ` Woodhouse, David via
2023-11-22 22:50 ` Volodymyr Babchuk
1 sibling, 2 replies; 52+ messages in thread
From: Paul Durrant @ 2023-11-22 17:03 UTC (permalink / raw)
To: Volodymyr Babchuk, qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
Oleksandr Tyshchenko, Anthony Perard, Paul Durrant, Kevin Wolf,
Hanna Reitz, Marc-André Lureau, Paolo Bonzini, Jason Wang,
open list:X86 Xen CPUs, open list:Block layer core
On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> was created by QEMU
>
> Xen PV devices in QEMU can be created in two ways: either by QEMU
> itself, if they were passed via command line, or by Xen toolstack. In
> the latter case, QEMU scans XenStore entries and configures devices
> accordingly.
>
> In the second case we don't want QEMU to write/delete front-end
> entries for two reasons: it might have no access to those entries if
> it is running in un-privileged domain and it is just incorrect to
> overwrite entries already provided by Xen toolstack, because toolstack
> manages those nodes. For example, it might read backend- or frontend-
> state to be sure that they are both disconnected and it is safe to
> destroy a domain.
>
> This patch checks presence of xendev->backend to check if Xen PV
> device is acting as a backend (i.e. it was configured by Xen
Technically *all* XenDevice objects are backends.
> toolstack) to decide if it should touch frontend entries in XenStore.
> Also, when we need to remove XenStore entries during device teardown
> only if they weren't created by Xen toolstack. If they were created by
> toolstack, then it is toolstack's job to do proper clean-up.
>
* Re: [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device...
2023-11-22 17:03 ` Paul Durrant
@ 2023-11-22 22:46 ` Woodhouse, David via
2023-11-22 22:50 ` Volodymyr Babchuk
1 sibling, 0 replies; 52+ messages in thread
From: Woodhouse, David via @ 2023-11-22 22:46 UTC (permalink / raw)
To: Volodymyr_Babchuk@epam.com, qemu-devel@nongnu.org, paul@xen.org
Cc: julien@xen.org, hreitz@redhat.com, anthony.perard@citrix.com,
Oleksandr_Tyshchenko@epam.com, sstabellini@kernel.org,
qemu-block@nongnu.org, marcandre.lureau@redhat.com,
kwolf@redhat.com, xen-devel@lists.xenproject.org,
pbonzini@redhat.com, jasowang@redhat.com
On Wed, 2023-11-22 at 17:03 +0000, Paul Durrant wrote:
>
> > This patch checks presence of xendev->backend to check if Xen PV
> > device is acting as a backend (i.e. it was configured by Xen
>
> Technally *all* XenDevice objects are backends.
Right. The key criterion is whether the backend was created by QEMU, or
merely discovered by QEMU after the toolstack created it.
* Re: [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device...
2023-11-22 17:03 ` Paul Durrant
2023-11-22 22:46 ` Woodhouse, David via
@ 2023-11-22 22:50 ` Volodymyr Babchuk
1 sibling, 0 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-22 22:50 UTC (permalink / raw)
To: paul@xen.org
Cc: qemu-devel@nongnu.org, David Woodhouse, Stefano Stabellini,
Julien Grall, Oleksandr Tyshchenko, Anthony Perard, Kevin Wolf,
Hanna Reitz, Marc-André Lureau, Paolo Bonzini, Jason Wang,
open list:X86 Xen CPUs, open list:Block layer core
Hi Paul,
Paul Durrant <xadimgnik@gmail.com> writes:
> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>> was created by QEMU
>> Xen PV devices in QEMU can be created in two ways: either by QEMU
>> itself, if they were passed via command line, or by Xen toolstack. In
>> the latter case, QEMU scans XenStore entries and configures devices
>> accordingly.
>> In the second case we don't want QEMU to write/delete front-end
>> entries for two reasons: it might have no access to those entries if
>> it is running in un-privileged domain and it is just incorrect to
>> overwrite entries already provided by Xen toolstack, because toolstack
>> manages those nodes. For example, it might read backend- or frontend-
>> state to be sure that they are both disconnected and it is safe to
>> destroy a domain.
>> This patch checks presence of xendev->backend to check if Xen PV
>> device is acting as a backend (i.e. it was configured by Xen
>
> Technally *all* XenDevice objects are backends.
>
Yes, you are right of course. I'll rephrase this paragraph in the next
version.
--
WBR, Volodymyr
* [PATCH v2 5/6] xen_arm: Set mc->max_cpus to GUEST_MAX_VCPUS in xen_arm_init()
2023-11-21 22:10 [PATCH v2 0/6] xen-arm: add support for virtio-pci Volodymyr Babchuk
` (2 preceding siblings ...)
2023-11-21 22:10 ` [PATCH v2 2/6] xen: backends: touch some XenStore nodes only if device Volodymyr Babchuk
@ 2023-11-21 22:10 ` Volodymyr Babchuk
2023-11-22 11:10 ` Philippe Mathieu-Daudé
2023-11-24 12:04 ` Igor Mammedov
2023-11-21 22:10 ` [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support Volodymyr Babchuk
2023-11-21 22:10 ` [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory Volodymyr Babchuk
5 siblings, 2 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-21 22:10 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
Oleksandr Tyshchenko, Volodymyr Babchuk, Peter Maydell,
open list:ARM TCG CPUs
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
The number of vCPUs used for the IOREQ configuration (machine->smp.cpus)
should really match the system value, as for each vCPU we set up a dedicated
evtchn for the communication with Xen at runtime. This is needed
for the IOREQ to be properly configured and to work if the involved domain
has more than one vCPU assigned.
Set the maximum number of currently supported guest vCPUs here (128), which
is defined in the public header arch-arm.h. The toolstack should then
pass max_vcpus using the "-smp" argument.
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
hw/arm/xen_arm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index a5631529d0..b9c3ae14b6 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -231,7 +231,7 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
MachineClass *mc = MACHINE_CLASS(oc);
mc->desc = "Xen Para-virtualized PC";
mc->init = xen_arm_init;
- mc->max_cpus = 1;
+ mc->max_cpus = GUEST_MAX_VCPUS;
mc->default_machine_opts = "accel=xen";
/* Set explicitly here to make sure that real ram_size is passed */
mc->default_ram_size = 0;
--
2.42.0
* Re: [PATCH v2 5/6] xen_arm: Set mc->max_cpus to GUEST_MAX_VCPUS in xen_arm_init()
2023-11-21 22:10 ` [PATCH v2 5/6] xen_arm: Set mc->max_cpus to GUEST_MAX_VCPUS in xen_arm_init() Volodymyr Babchuk
@ 2023-11-22 11:10 ` Philippe Mathieu-Daudé
2023-11-24 12:04 ` Igor Mammedov
1 sibling, 0 replies; 52+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-11-22 11:10 UTC (permalink / raw)
To: Volodymyr Babchuk, qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
Oleksandr Tyshchenko, Peter Maydell, open list:ARM TCG CPUs
On 21/11/23 23:10, Volodymyr Babchuk wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> The number of vCPUs used for the IOREQ configuration (machine->smp.cpus)
> should really match the system value as for each vCPU we setup a dedicated
> evtchn for the communication with Xen at the runtime. This is needed
> for the IOREQ to be properly configured and work if the involved domain
> has more than one vCPU assigned.
>
> Set the number of current supported guest vCPUs here (128) which is
> defined in public header arch-arm.h. And the toolstack should then
> pass max_vcpus using "-smp" arg.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
> hw/arm/xen_arm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH v2 5/6] xen_arm: Set mc->max_cpus to GUEST_MAX_VCPUS in xen_arm_init()
2023-11-21 22:10 ` [PATCH v2 5/6] xen_arm: Set mc->max_cpus to GUEST_MAX_VCPUS in xen_arm_init() Volodymyr Babchuk
2023-11-22 11:10 ` Philippe Mathieu-Daudé
@ 2023-11-24 12:04 ` Igor Mammedov
1 sibling, 0 replies; 52+ messages in thread
From: Igor Mammedov @ 2023-11-24 12:04 UTC (permalink / raw)
To: Volodymyr Babchuk
Cc: qemu-devel@nongnu.org, David Woodhouse, Stefano Stabellini,
Julien Grall, Oleksandr Tyshchenko, Peter Maydell,
open list:ARM TCG CPUs
On Tue, 21 Nov 2023 22:10:28 +0000
Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
Probably a typo in the subject: mc->max_cpus is the limit on the maximum
number of supported vCPUs and it shouldn't be set by xen_arm_init().
The patch itself, though, does the right thing by setting it in
xen_arm_machine_class_init().
Also, the explanation below, while valid, is not the reason for
increasing mc->max_cpus. The reason could be as simple as
'increase the max vCPU limit for the FOO machine to XXX, otherwise
machine creation would be aborted early by generic code with
error ...';
see machine_parse_smp_config()
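For reference, the generic check being referred to is roughly of this shape
(paraphrased; variable names are approximate and this is not the exact
machine_parse_smp_config() source):

/* -smp values above the machine class limit abort machine creation early. */
if (ms->smp.max_cpus > mc->max_cpus) {
    error_setg(errp, "Invalid SMP CPUs %u. The max CPUs supported by "
               "machine '%s' is %u",
               ms->smp.max_cpus, mc->name, mc->max_cpus);
    return;
}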
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> The number of vCPUs used for the IOREQ configuration (machine->smp.cpus)
> should really match the system value as for each vCPU we setup a dedicated
> evtchn for the communication with Xen at the runtime. This is needed
> for the IOREQ to be properly configured and work if the involved domain
> has more than one vCPU assigned.
>
> Set the number of current supported guest vCPUs here (128) which is
> defined in public header arch-arm.h. And the toolstack should then
> pass max_vcpus using "-smp" arg.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
> hw/arm/xen_arm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index a5631529d0..b9c3ae14b6 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -231,7 +231,7 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
> MachineClass *mc = MACHINE_CLASS(oc);
> mc->desc = "Xen Para-virtualized PC";
> mc->init = xen_arm_init;
> - mc->max_cpus = 1;
> + mc->max_cpus = GUEST_MAX_VCPUS;
> mc->default_machine_opts = "accel=xen";
> /* Set explicitly here to make sure that real ram_size is passed */
> mc->default_ram_size = 0;
* [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support
2023-11-21 22:10 [PATCH v2 0/6] xen-arm: add support for virtio-pci Volodymyr Babchuk
` (3 preceding siblings ...)
2023-11-21 22:10 ` [PATCH v2 5/6] xen_arm: Set mc->max_cpus to GUEST_MAX_VCPUS in xen_arm_init() Volodymyr Babchuk
@ 2023-11-21 22:10 ` Volodymyr Babchuk
2023-11-22 22:39 ` Stefano Stabellini
2023-11-24 12:30 ` Igor Mammedov
2023-11-21 22:10 ` [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory Volodymyr Babchuk
5 siblings, 2 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-21 22:10 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
Oleksandr Tyshchenko, Volodymyr Babchuk, Peter Maydell,
Anthony Perard, Paul Durrant, open list:ARM TCG CPUs,
open list:X86 Xen CPUs
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
The bridge is needed for virtio-pci support, as QEMU can emulate the
whole bridge with any virtio-pci devices connected to it.
This patch provides a flexible way to configure PCIe bridge resources
with xenstore. We made this for several reasons:
- We don't want to clash with vPCI devices, so we need information
from Xen toolstack on which PCI bus to use.
- The guest memory layout that describes these resources is not stable
and may vary between guests, so we cannot rely on static resources
to be always the same for both ends.
- Also the device-models which run in different domains and serve
virtio-pci devices for the same guest should use different host
bridge resources for Xen to distinguish. The rule for the guest
device-tree generation is one PCI host bridge per backend domain.
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
Changes from v1:
- Renamed virtio_pci_host to pcie_host entries in XenStore, because
there is nothing specific to virtio-pci: any PCI device can be
emulated via this newly created bridge.
---
hw/arm/xen_arm.c | 186 ++++++++++++++++++++++++++++++++++++
hw/xen/xen-hvm-common.c | 9 +-
include/hw/xen/xen_native.h | 8 +-
3 files changed, 200 insertions(+), 3 deletions(-)
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index b9c3ae14b6..d506d55d0f 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -22,6 +22,7 @@
*/
#include "qemu/osdep.h"
+#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qapi/qapi-commands-migration.h"
#include "qapi/visitor.h"
@@ -34,6 +35,9 @@
#include "hw/xen/xen-hvm-common.h"
#include "sysemu/tpm.h"
#include "hw/xen/arch_hvm.h"
+#include "exec/address-spaces.h"
+#include "hw/pci-host/gpex.h"
+#include "hw/virtio/virtio-pci.h"
#define TYPE_XEN_ARM MACHINE_TYPE_NAME("xenpvh")
OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
@@ -58,6 +62,11 @@ struct XenArmState {
struct {
uint64_t tpm_base_addr;
} cfg;
+
+ MemMapEntry pcie_mmio;
+ MemMapEntry pcie_ecam;
+ MemMapEntry pcie_mmio_high;
+ int pcie_irq;
};
static MemoryRegion ram_lo, ram_hi;
@@ -73,6 +82,7 @@ static MemoryRegion ram_lo, ram_hi;
#define NR_VIRTIO_MMIO_DEVICES \
(GUEST_VIRTIO_MMIO_SPI_LAST - GUEST_VIRTIO_MMIO_SPI_FIRST)
+/* TODO It should be xendevicemodel_set_pci_intx_level() for PCI interrupts. */
static void xen_set_irq(void *opaque, int irq, int level)
{
if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
@@ -129,6 +139,176 @@ static void xen_init_ram(MachineState *machine)
}
}
+static void xen_create_pcie(XenArmState *xam)
+{
+ MemoryRegion *mmio_alias, *mmio_alias_high, *mmio_reg;
+ MemoryRegion *ecam_alias, *ecam_reg;
+ DeviceState *dev;
+ int i;
+
+ dev = qdev_new(TYPE_GPEX_HOST);
+ sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
+
+ /* Map ECAM space */
+ ecam_alias = g_new0(MemoryRegion, 1);
+ ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
+ memory_region_init_alias(ecam_alias, OBJECT(dev), "pcie-ecam",
+ ecam_reg, 0, xam->pcie_ecam.size);
+ memory_region_add_subregion(get_system_memory(), xam->pcie_ecam.base,
+ ecam_alias);
+
+ /* Map the MMIO space */
+ mmio_alias = g_new0(MemoryRegion, 1);
+ mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
+ memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
+ mmio_reg,
+ xam->pcie_mmio.base,
+ xam->pcie_mmio.size);
+ memory_region_add_subregion(get_system_memory(), xam->pcie_mmio.base,
+ mmio_alias);
+
+ /* Map the MMIO_HIGH space */
+ mmio_alias_high = g_new0(MemoryRegion, 1);
+ memory_region_init_alias(mmio_alias_high, OBJECT(dev), "pcie-mmio-high",
+ mmio_reg,
+ xam->pcie_mmio_high.base,
+ xam->pcie_mmio_high.size);
+ memory_region_add_subregion(get_system_memory(), xam->pcie_mmio_high.base,
+ mmio_alias_high);
+
+ /* Legacy PCI interrupts (#INTA - #INTD) */
+ for (i = 0; i < GPEX_NUM_IRQS; i++) {
+ qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
+ xam->pcie_irq + i);
+
+ sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
+ gpex_set_irq_num(GPEX_HOST(dev), i, xam->pcie_irq + i);
+ }
+
+ DPRINTF("Created PCIe host bridge\n");
+}
+
+static bool xen_read_pcie_prop(XenArmState *xam, unsigned int xen_domid,
+ const char *prop_name, unsigned long *data)
+{
+ char path[128], *value = NULL;
+ unsigned int len;
+ bool ret = true;
+
+ snprintf(path, sizeof(path), "device-model/%d/pcie_host/%s",
+ xen_domid, prop_name);
+ value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
+
+ if (qemu_strtou64(value, NULL, 16, data)) {
+ error_report("xenpv: Failed to get 'pcie_host/%s' prop",
+ prop_name);
+ ret = false;
+ }
+
+ free(value);
+
+ return ret;
+}
+
+static int xen_get_pcie_params(XenArmState *xam)
+{
+ char path[128], *value = NULL, **entries = NULL;
+ unsigned int len, tmp;
+ int rc = -1;
+
+ snprintf(path, sizeof(path), "device-model/%d/pcie_host",
+ xen_domid);
+ entries = xs_directory(xam->state->xenstore, XBT_NULL, path, &len);
+ if (entries == NULL) {
+ error_report("xenpv: 'pcie_host' dir is not present");
+ return -1;
+ }
+ free(entries);
+ if (len != 9) {
+ error_report("xenpv: Unexpected number of entries in 'pcie_host' dir");
+ goto out;
+ }
+
+ snprintf(path, sizeof(path), "device-model/%d/pcie_host/id",
+ xen_domid);
+ value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
+ if (qemu_strtoui(value, NULL, 10, &tmp)) {
+ error_report("xenpv: Failed to get 'pcie_host/id' prop");
+ goto out;
+ }
+ free(value);
+ value = NULL;
+ if (tmp > 0xffff) {
+ error_report("xenpv: Wrong 'pcie_host/id' value %u", tmp);
+ goto out;
+ }
+ xen_pci_segment = tmp;
+
+ if (!xen_read_pcie_prop(xam, xen_domid, "ecam_base",
+ &xam->pcie_ecam.base)) {
+ goto out;
+ }
+
+ if (!xen_read_pcie_prop(xam, xen_domid, "ecam_size",
+ &xam->pcie_ecam.size)) {
+ goto out;
+ }
+
+ if (!xen_read_pcie_prop(xam, xen_domid, "mem_base",
+ &xam->pcie_mmio.base)) {
+ goto out;
+ }
+
+ if (!xen_read_pcie_prop(xam, xen_domid, "mem_size",
+ &xam->pcie_mmio.base)) {
+ goto out;
+ }
+
+ if (!xen_read_pcie_prop(xam, xen_domid, "prefetch_mem_base",
+ &xam->pcie_mmio_high.base)) {
+ goto out;
+ }
+
+ if (!xen_read_pcie_prop(xam, xen_domid, "prefetch_mem_size",
+ &xam->pcie_mmio_high.size)) {
+ goto out;
+ }
+
+ snprintf(path, sizeof(path), "device-model/%d/pcie_host/irq_first",
+ xen_domid);
+ value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
+ if (qemu_strtoi(value, NULL, 10, &xam->pcie_irq)) {
+ error_report("xenpv: Failed to get 'pcie_host/irq_first' prop");
+ goto out;
+ }
+ free(value);
+ value = NULL;
+ DPRINTF("PCIe host bridge: irq_first %u\n", xam->pcie_irq);
+
+ snprintf(path, sizeof(path), "device-model/%d/pcie_host/num_irqs",
+ xen_domid);
+ value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
+ if (qemu_strtoui(value, NULL, 10, &tmp)) {
+ error_report("xenpv: Failed to get 'pcie_host/num_irqs' prop");
+ goto out;
+ }
+ free(value);
+ value = NULL;
+ if (tmp != GPEX_NUM_IRQS) {
+ error_report("xenpv: Wrong 'pcie_host/num_irqs' value %u", tmp);
+ goto out;
+ }
+ DPRINTF("PCIe host bridge: num_irqs %u\n", tmp);
+
+ rc = 0;
+out:
+ if (value) {
+ free(value);
+ }
+
+ return rc;
+}
+
void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
{
hw_error("Invalid ioreq type 0x%x\n", req->type);
@@ -189,6 +369,12 @@ static void xen_arm_init(MachineState *machine)
xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
xen_create_virtio_mmio_devices(xam);
+ if (!xen_get_pcie_params(xam)) {
+ xen_create_pcie(xam);
+ } else {
+ DPRINTF("PCIe host bridge is not available,"
+ "only virtio-mmio can be used\n");
+ }
#ifdef CONFIG_TPM
if (xam->cfg.tpm_base_addr) {
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index 565dc39c8f..0f78f15057 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -47,6 +47,8 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
g_free(pfn_list);
}
+uint16_t xen_pci_segment;
+
static void xen_set_memory(struct MemoryListener *listener,
MemoryRegionSection *section,
bool add)
@@ -382,7 +384,12 @@ static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
}
QLIST_FOREACH(xendev, &state->dev_list, entry) {
- if (xendev->sbdf != sbdf) {
+ /*
+ * As we append xen_pci_segment just before forming dm_op in
+ * xen_map_pcidev() we need to check with appended xen_pci_segment
+ * here as well.
+ */
+ if ((xendev->sbdf | (xen_pci_segment << 16)) != sbdf) {
continue;
}
diff --git a/include/hw/xen/xen_native.h b/include/hw/xen/xen_native.h
index 6f09c48823..2b1debaff4 100644
--- a/include/hw/xen/xen_native.h
+++ b/include/hw/xen/xen_native.h
@@ -431,6 +431,8 @@ static inline void xen_unmap_io_section(domid_t dom,
0, start_addr, end_addr);
}
+extern uint16_t xen_pci_segment;
+
static inline void xen_map_pcidev(domid_t dom,
ioservid_t ioservid,
PCIDevice *pci_dev)
@@ -441,7 +443,8 @@ static inline void xen_map_pcidev(domid_t dom,
trace_xen_map_pcidev(ioservid, pci_dev_bus_num(pci_dev),
PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
- xendevicemodel_map_pcidev_to_ioreq_server(xen_dmod, dom, ioservid, 0,
+ xendevicemodel_map_pcidev_to_ioreq_server(xen_dmod, dom, ioservid,
+ xen_pci_segment,
pci_dev_bus_num(pci_dev),
PCI_SLOT(pci_dev->devfn),
PCI_FUNC(pci_dev->devfn));
@@ -457,7 +460,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
trace_xen_unmap_pcidev(ioservid, pci_dev_bus_num(pci_dev),
PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
- xendevicemodel_unmap_pcidev_from_ioreq_server(xen_dmod, dom, ioservid, 0,
+ xendevicemodel_unmap_pcidev_from_ioreq_server(xen_dmod, dom, ioservid,
+ xen_pci_segment,
pci_dev_bus_num(pci_dev),
PCI_SLOT(pci_dev->devfn),
PCI_FUNC(pci_dev->devfn));
--
2.42.0
^ permalink raw reply related [flat|nested] 52+ messages in thread
* Re: [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support
2023-11-21 22:10 ` [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support Volodymyr Babchuk
@ 2023-11-22 22:39 ` Stefano Stabellini
2023-11-22 23:44 ` Vikram Garhwal
2023-11-24 12:30 ` Igor Mammedov
1 sibling, 1 reply; 52+ messages in thread
From: Stefano Stabellini @ 2023-11-22 22:39 UTC (permalink / raw)
To: Volodymyr Babchuk
Cc: qemu-devel@nongnu.org, David Woodhouse, Stefano Stabellini,
Julien Grall, Oleksandr Tyshchenko, Peter Maydell, Anthony Perard,
Paul Durrant, open list:ARM TCG CPUs, open list:X86 Xen CPUs,
vikram.garhwal
+Vikram
On Tue, 21 Nov 2023, Volodymyr Babchuk wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> The bridge is needed for virtio-pci support, as QEMU can emulate the
> whole bridge with any virtio-pci devices connected to it.
>
> This patch provides a flexible way to configure PCIe bridge resources
> with xenstore. We made this for several reasons:
>
> - We don't want to clash with vPCI devices, so we need information
> from Xen toolstack on which PCI bus to use.
> - The guest memory layout that describes these resources is not stable
> and may vary between guests, so we cannot rely on static resources
> to be always the same for both ends.
> - Also the device-models which run in different domains and serve
> virtio-pci devices for the same guest should use different host
> bridge resources for Xen to distinguish. The rule for the guest
> device-tree generation is one PCI host bridge per backend domain.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
There is still a discussion ongoing on xen-devel if / how to register a
PCI Root Complex or individual BDFs. In the meantime a couple of
comments.
Typically emulated devices are configured in QEMU via QEMU command line
parameters, not xenstore. If you are running QEMU in a domU (instead of
Dom0) you can always read config parameters from xenstore from a wrapper
bash script (using xenstore-read) and then pass the right command line
options to QEMU.
If you need help in adding new QEMU command line options, Vikram (CCed)
can help.
> ---
>
> Changes from v1:
>
> - Renamed virtio_pci_host to pcie_host entries in XenStore, because
> there is nothing specific to virtio-pci: any PCI device can be
> emulated via this newly created bridge.
> ---
> hw/arm/xen_arm.c | 186 ++++++++++++++++++++++++++++++++++++
> hw/xen/xen-hvm-common.c | 9 +-
> include/hw/xen/xen_native.h | 8 +-
> 3 files changed, 200 insertions(+), 3 deletions(-)
>
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index b9c3ae14b6..d506d55d0f 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -22,6 +22,7 @@
> */
>
> #include "qemu/osdep.h"
> +#include "qemu/cutils.h"
> #include "qemu/error-report.h"
> #include "qapi/qapi-commands-migration.h"
> #include "qapi/visitor.h"
> @@ -34,6 +35,9 @@
> #include "hw/xen/xen-hvm-common.h"
> #include "sysemu/tpm.h"
> #include "hw/xen/arch_hvm.h"
> +#include "exec/address-spaces.h"
> +#include "hw/pci-host/gpex.h"
> +#include "hw/virtio/virtio-pci.h"
>
> #define TYPE_XEN_ARM MACHINE_TYPE_NAME("xenpvh")
> OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
> @@ -58,6 +62,11 @@ struct XenArmState {
> struct {
> uint64_t tpm_base_addr;
> } cfg;
> +
> + MemMapEntry pcie_mmio;
> + MemMapEntry pcie_ecam;
> + MemMapEntry pcie_mmio_high;
> + int pcie_irq;
> };
>
> static MemoryRegion ram_lo, ram_hi;
> @@ -73,6 +82,7 @@ static MemoryRegion ram_lo, ram_hi;
> #define NR_VIRTIO_MMIO_DEVICES \
> (GUEST_VIRTIO_MMIO_SPI_LAST - GUEST_VIRTIO_MMIO_SPI_FIRST)
>
> +/* TODO It should be xendevicemodel_set_pci_intx_level() for PCI interrupts. */
> static void xen_set_irq(void *opaque, int irq, int level)
> {
> if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
> @@ -129,6 +139,176 @@ static void xen_init_ram(MachineState *machine)
> }
> }
>
> +static void xen_create_pcie(XenArmState *xam)
> +{
> + MemoryRegion *mmio_alias, *mmio_alias_high, *mmio_reg;
> + MemoryRegion *ecam_alias, *ecam_reg;
> + DeviceState *dev;
> + int i;
> +
> + dev = qdev_new(TYPE_GPEX_HOST);
> + sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
> +
> + /* Map ECAM space */
> + ecam_alias = g_new0(MemoryRegion, 1);
> + ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
> + memory_region_init_alias(ecam_alias, OBJECT(dev), "pcie-ecam",
> + ecam_reg, 0, xam->pcie_ecam.size);
> + memory_region_add_subregion(get_system_memory(), xam->pcie_ecam.base,
> + ecam_alias);
> +
> + /* Map the MMIO space */
> + mmio_alias = g_new0(MemoryRegion, 1);
> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
> + mmio_reg,
> + xam->pcie_mmio.base,
> + xam->pcie_mmio.size);
> + memory_region_add_subregion(get_system_memory(), xam->pcie_mmio.base,
> + mmio_alias);
> +
> + /* Map the MMIO_HIGH space */
> + mmio_alias_high = g_new0(MemoryRegion, 1);
> + memory_region_init_alias(mmio_alias_high, OBJECT(dev), "pcie-mmio-high",
> + mmio_reg,
> + xam->pcie_mmio_high.base,
> + xam->pcie_mmio_high.size);
> + memory_region_add_subregion(get_system_memory(), xam->pcie_mmio_high.base,
> + mmio_alias_high);
> +
> + /* Legacy PCI interrupts (#INTA - #INTD) */
> + for (i = 0; i < GPEX_NUM_IRQS; i++) {
> + qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
> + xam->pcie_irq + i);
> +
> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
> + gpex_set_irq_num(GPEX_HOST(dev), i, xam->pcie_irq + i);
> + }
> +
> + DPRINTF("Created PCIe host bridge\n");
> +}
> +
> +static bool xen_read_pcie_prop(XenArmState *xam, unsigned int xen_domid,
> + const char *prop_name, unsigned long *data)
> +{
> + char path[128], *value = NULL;
> + unsigned int len;
> + bool ret = true;
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/%s",
> + xen_domid, prop_name);
> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> +
> + if (qemu_strtou64(value, NULL, 16, data)) {
> + error_report("xenpv: Failed to get 'pcie_host/%s' prop",
> + prop_name);
> + ret = false;
> + }
> +
> + free(value);
> +
> + return ret;
> +}
> +
> +static int xen_get_pcie_params(XenArmState *xam)
> +{
> + char path[128], *value = NULL, **entries = NULL;
> + unsigned int len, tmp;
> + int rc = -1;
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host",
> + xen_domid);
> + entries = xs_directory(xam->state->xenstore, XBT_NULL, path, &len);
> + if (entries == NULL) {
> + error_report("xenpv: 'pcie_host' dir is not present");
> + return -1;
> + }
> + free(entries);
> + if (len != 9) {
> + error_report("xenpv: Unexpected number of entries in 'pcie_host' dir");
> + goto out;
> + }
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/id",
> + xen_domid);
> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> + if (qemu_strtoui(value, NULL, 10, &tmp)) {
> + error_report("xenpv: Failed to get 'pcie_host/id' prop");
> + goto out;
> + }
> + free(value);
> + value = NULL;
> + if (tmp > 0xffff) {
> + error_report("xenpv: Wrong 'pcie_host/id' value %u", tmp);
> + goto out;
> + }
> + xen_pci_segment = tmp;
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "ecam_base",
> + &xam->pcie_ecam.base)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "ecam_size",
> + &xam->pcie_ecam.size)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "mem_base",
> + &xam->pcie_mmio.base)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "mem_size",
> + &xam->pcie_mmio.base)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "prefetch_mem_base",
> + &xam->pcie_mmio_high.base)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "prefetch_mem_size",
> + &xam->pcie_mmio_high.size)) {
> + goto out;
> + }
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/irq_first",
> + xen_domid);
> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> + if (qemu_strtoi(value, NULL, 10, &xam->pcie_irq)) {
> + error_report("xenpv: Failed to get 'pcie_host/irq_first' prop");
> + goto out;
> + }
> + free(value);
> + value = NULL;
> + DPRINTF("PCIe host bridge: irq_first %u\n", xam->pcie_irq);
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/num_irqs",
> + xen_domid);
> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> + if (qemu_strtoui(value, NULL, 10, &tmp)) {
> + error_report("xenpv: Failed to get 'pcie_host/num_irqs' prop");
> + goto out;
> + }
> + free(value);
> + value = NULL;
> + if (tmp != GPEX_NUM_IRQS) {
> + error_report("xenpv: Wrong 'pcie_host/num_irqs' value %u", tmp);
> + goto out;
> + }
> + DPRINTF("PCIe host bridge: num_irqs %u\n", tmp);
> +
> + rc = 0;
> +out:
> + if (value) {
> + free(value);
> + }
> +
> + return rc;
> +}
> +
> void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
> {
> hw_error("Invalid ioreq type 0x%x\n", req->type);
> @@ -189,6 +369,12 @@ static void xen_arm_init(MachineState *machine)
> xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
>
> xen_create_virtio_mmio_devices(xam);
> + if (!xen_get_pcie_params(xam)) {
> + xen_create_pcie(xam);
> + } else {
> + DPRINTF("PCIe host bridge is not available,"
> + "only virtio-mmio can be used\n");
> + }
>
> #ifdef CONFIG_TPM
> if (xam->cfg.tpm_base_addr) {
> diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> index 565dc39c8f..0f78f15057 100644
> --- a/hw/xen/xen-hvm-common.c
> +++ b/hw/xen/xen-hvm-common.c
> @@ -47,6 +47,8 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> g_free(pfn_list);
> }
>
> +uint16_t xen_pci_segment;
> +
> static void xen_set_memory(struct MemoryListener *listener,
> MemoryRegionSection *section,
> bool add)
> @@ -382,7 +384,12 @@ static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
> }
>
> QLIST_FOREACH(xendev, &state->dev_list, entry) {
> - if (xendev->sbdf != sbdf) {
> + /*
> + * As we append xen_pci_segment just before forming dm_op in
> + * xen_map_pcidev() we need to check with appended xen_pci_segment
> + * here as well.
> + */
> + if ((xendev->sbdf | (xen_pci_segment << 16)) != sbdf) {
> continue;
> }
>
> diff --git a/include/hw/xen/xen_native.h b/include/hw/xen/xen_native.h
> index 6f09c48823..2b1debaff4 100644
> --- a/include/hw/xen/xen_native.h
> +++ b/include/hw/xen/xen_native.h
> @@ -431,6 +431,8 @@ static inline void xen_unmap_io_section(domid_t dom,
> 0, start_addr, end_addr);
> }
>
> +extern uint16_t xen_pci_segment;
> +
> static inline void xen_map_pcidev(domid_t dom,
> ioservid_t ioservid,
> PCIDevice *pci_dev)
> @@ -441,7 +443,8 @@ static inline void xen_map_pcidev(domid_t dom,
>
> trace_xen_map_pcidev(ioservid, pci_dev_bus_num(pci_dev),
> PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
> - xendevicemodel_map_pcidev_to_ioreq_server(xen_dmod, dom, ioservid, 0,
> + xendevicemodel_map_pcidev_to_ioreq_server(xen_dmod, dom, ioservid,
> + xen_pci_segment,
> pci_dev_bus_num(pci_dev),
> PCI_SLOT(pci_dev->devfn),
> PCI_FUNC(pci_dev->devfn));
> @@ -457,7 +460,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
>
> trace_xen_unmap_pcidev(ioservid, pci_dev_bus_num(pci_dev),
> PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
> - xendevicemodel_unmap_pcidev_from_ioreq_server(xen_dmod, dom, ioservid, 0,
> + xendevicemodel_unmap_pcidev_from_ioreq_server(xen_dmod, dom, ioservid,
> + xen_pci_segment,
> pci_dev_bus_num(pci_dev),
> PCI_SLOT(pci_dev->devfn),
> PCI_FUNC(pci_dev->devfn));
> --
> 2.42.0
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support
2023-11-22 22:39 ` Stefano Stabellini
@ 2023-11-22 23:44 ` Vikram Garhwal
2023-11-23 0:11 ` Volodymyr Babchuk
0 siblings, 1 reply; 52+ messages in thread
From: Vikram Garhwal @ 2023-11-22 23:44 UTC (permalink / raw)
To: Stefano Stabellini
Cc: Volodymyr Babchuk, qemu-devel@nongnu.org, David Woodhouse,
Julien Grall, Oleksandr Tyshchenko, Peter Maydell, Anthony Perard,
Paul Durrant, open list:ARM TCG CPUs, open list:X86 Xen CPUs
Hi Volodymyr,
Thank you for sharing this patch. I have a few comments below.
On Wed, Nov 22, 2023 at 02:39:46PM -0800, Stefano Stabellini wrote:
> +Vikram
>
> On Tue, 21 Nov 2023, Volodymyr Babchuk wrote:
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >
> > The bridge is needed for virtio-pci support, as QEMU can emulate the
> > whole bridge with any virtio-pci devices connected to it.
> >
> > This patch provides a flexible way to configure PCIe bridge resources
> > with xenstore. We made this for several reasons:
> >
> > - We don't want to clash with vPCI devices, so we need information
> > from Xen toolstack on which PCI bus to use.
> > - The guest memory layout that describes these resources is not stable
> > and may vary between guests, so we cannot rely on static resources
> > to be always the same for both ends.
> > - Also the device-models which run in different domains and serve
> > virtio-pci devices for the same guest should use different host
> > bridge resources for Xen to distinguish. The rule for the guest
> > device-tree generation is one PCI host bridge per backend domain.
> >
> > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>
> There is still a discussion ongoing on xen-devel if / how to register a
> PCI Root Complex or individual BDFs. In the meantime a couple of
> comments.
>
> Typically emulated devices are configured in QEMU via QEMU command line
> parameters, not xenstore. If you are running QEMU in a domU (instead of
> Dom0) you can always read config parameters from xenstore from a wrapper
> bash script (using xenstore-read) and then pass the right command line
> options to QEMU.
>
> If you need help in adding new QEMU command line options, Vikram (CCed)
> can help.
>
>
I agree with Stefano here. Setting properties would be better and easier.
I have one question here:
1. If there are multiple QEMU backends, the Xen tools will end up
creating a lot of entries in xenstore, and we need to remove those xenstore
entries when a backend goes away. Is this already handled in the Xen tools?
Regards,
Vikram
> > ---
> >
> > Changes from v1:
> >
> > - Renamed virtio_pci_host to pcie_host entries in XenStore, because
> > there is nothing specific to virtio-pci: any PCI device can be
> > emulated via this newly created bridge.
> > ---
> > hw/arm/xen_arm.c | 186 ++++++++++++++++++++++++++++++++++++
> > hw/xen/xen-hvm-common.c | 9 +-
> > include/hw/xen/xen_native.h | 8 +-
> > 3 files changed, 200 insertions(+), 3 deletions(-)
> >
> > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > index b9c3ae14b6..d506d55d0f 100644
> > --- a/hw/arm/xen_arm.c
> > +++ b/hw/arm/xen_arm.c
> > @@ -22,6 +22,7 @@
> > */
> >
> > #include "qemu/osdep.h"
> > +#include "qemu/cutils.h"
> > #include "qemu/error-report.h"
> > #include "qapi/qapi-commands-migration.h"
> > #include "qapi/visitor.h"
> > @@ -34,6 +35,9 @@
> > #include "hw/xen/xen-hvm-common.h"
> > #include "sysemu/tpm.h"
> > #include "hw/xen/arch_hvm.h"
> > +#include "exec/address-spaces.h"
> > +#include "hw/pci-host/gpex.h"
> > +#include "hw/virtio/virtio-pci.h"
> >
> > #define TYPE_XEN_ARM MACHINE_TYPE_NAME("xenpvh")
> > OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
> > @@ -58,6 +62,11 @@ struct XenArmState {
> > struct {
> > uint64_t tpm_base_addr;
> > } cfg;
> > +
> > + MemMapEntry pcie_mmio;
> > + MemMapEntry pcie_ecam;
> > + MemMapEntry pcie_mmio_high;
> > + int pcie_irq;
> > };
> >
> > static MemoryRegion ram_lo, ram_hi;
> > @@ -73,6 +82,7 @@ static MemoryRegion ram_lo, ram_hi;
> > #define NR_VIRTIO_MMIO_DEVICES \
> > (GUEST_VIRTIO_MMIO_SPI_LAST - GUEST_VIRTIO_MMIO_SPI_FIRST)
> >
> > +/* TODO It should be xendevicemodel_set_pci_intx_level() for PCI interrupts. */
> > static void xen_set_irq(void *opaque, int irq, int level)
> > {
> > if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
> > @@ -129,6 +139,176 @@ static void xen_init_ram(MachineState *machine)
> > }
> > }
> >
> > +static void xen_create_pcie(XenArmState *xam)
> > +{
> > + MemoryRegion *mmio_alias, *mmio_alias_high, *mmio_reg;
> > + MemoryRegion *ecam_alias, *ecam_reg;
> > + DeviceState *dev;
> > + int i;
> > +
> > + dev = qdev_new(TYPE_GPEX_HOST);
> > + sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
> > +
> > + /* Map ECAM space */
> > + ecam_alias = g_new0(MemoryRegion, 1);
> > + ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
> > + memory_region_init_alias(ecam_alias, OBJECT(dev), "pcie-ecam",
> > + ecam_reg, 0, xam->pcie_ecam.size);
> > + memory_region_add_subregion(get_system_memory(), xam->pcie_ecam.base,
> > + ecam_alias);
> > +
> > + /* Map the MMIO space */
> > + mmio_alias = g_new0(MemoryRegion, 1);
> > + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> > + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
> > + mmio_reg,
> > + xam->pcie_mmio.base,
> > + xam->pcie_mmio.size);
> > + memory_region_add_subregion(get_system_memory(), xam->pcie_mmio.base,
> > + mmio_alias);
> > +
> > + /* Map the MMIO_HIGH space */
> > + mmio_alias_high = g_new0(MemoryRegion, 1);
> > + memory_region_init_alias(mmio_alias_high, OBJECT(dev), "pcie-mmio-high",
> > + mmio_reg,
> > + xam->pcie_mmio_high.base,
> > + xam->pcie_mmio_high.size);
> > + memory_region_add_subregion(get_system_memory(), xam->pcie_mmio_high.base,
> > + mmio_alias_high);
> > +
> > + /* Legacy PCI interrupts (#INTA - #INTD) */
> > + for (i = 0; i < GPEX_NUM_IRQS; i++) {
> > + qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
> > + xam->pcie_irq + i);
> > +
> > + sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
> > + gpex_set_irq_num(GPEX_HOST(dev), i, xam->pcie_irq + i);
> > + }
> > +
> > + DPRINTF("Created PCIe host bridge\n");
> > +}
> > +
> > +static bool xen_read_pcie_prop(XenArmState *xam, unsigned int xen_domid,
> > + const char *prop_name, unsigned long *data)
> > +{
> > + char path[128], *value = NULL;
> > + unsigned int len;
> > + bool ret = true;
> > +
> > + snprintf(path, sizeof(path), "device-model/%d/pcie_host/%s",
> > + xen_domid, prop_name);
> > + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> > +
> > + if (qemu_strtou64(value, NULL, 16, data)) {
> > + error_report("xenpv: Failed to get 'pcie_host/%s' prop",
> > + prop_name);
> > + ret = false;
> > + }
> > +
> > + free(value);
> > +
> > + return ret;
> > +}
> > +
> > +static int xen_get_pcie_params(XenArmState *xam)
> > +{
> > + char path[128], *value = NULL, **entries = NULL;
> > + unsigned int len, tmp;
> > + int rc = -1;
> > +
> > + snprintf(path, sizeof(path), "device-model/%d/pcie_host",
> > + xen_domid);
> > + entries = xs_directory(xam->state->xenstore, XBT_NULL, path, &len);
> > + if (entries == NULL) {
> > + error_report("xenpv: 'pcie_host' dir is not present");
> > + return -1;
> > + }
> > + free(entries);
> > + if (len != 9) {
> > + error_report("xenpv: Unexpected number of entries in 'pcie_host' dir");
> > + goto out;
> > + }
> > +
> > + snprintf(path, sizeof(path), "device-model/%d/pcie_host/id",
> > + xen_domid);
> > + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> > + if (qemu_strtoui(value, NULL, 10, &tmp)) {
> > + error_report("xenpv: Failed to get 'pcie_host/id' prop");
> > + goto out;
> > + }
> > + free(value);
> > + value = NULL;
> > + if (tmp > 0xffff) {
> > + error_report("xenpv: Wrong 'pcie_host/id' value %u", tmp);
> > + goto out;
> > + }
> > + xen_pci_segment = tmp;
> > +
> > + if (!xen_read_pcie_prop(xam, xen_domid, "ecam_base",
> > + &xam->pcie_ecam.base)) {
> > + goto out;
> > + }
> > +
> > + if (!xen_read_pcie_prop(xam, xen_domid, "ecam_size",
> > + &xam->pcie_ecam.size)) {
> > + goto out;
> > + }
> > +
> > + if (!xen_read_pcie_prop(xam, xen_domid, "mem_base",
> > + &xam->pcie_mmio.base)) {
> > + goto out;
> > + }
> > +
> > + if (!xen_read_pcie_prop(xam, xen_domid, "mem_size",
> > + &xam->pcie_mmio.base)) {
> > + goto out;
> > + }
> > +
> > + if (!xen_read_pcie_prop(xam, xen_domid, "prefetch_mem_base",
> > + &xam->pcie_mmio_high.base)) {
> > + goto out;
> > + }
> > +
> > + if (!xen_read_pcie_prop(xam, xen_domid, "prefetch_mem_size",
> > + &xam->pcie_mmio_high.size)) {
> > + goto out;
> > + }
> > +
> > + snprintf(path, sizeof(path), "device-model/%d/pcie_host/irq_first",
> > + xen_domid);
> > + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> > + if (qemu_strtoi(value, NULL, 10, &xam->pcie_irq)) {
> > + error_report("xenpv: Failed to get 'pcie_host/irq_first' prop");
> > + goto out;
> > + }
> > + free(value);
> > + value = NULL;
> > + DPRINTF("PCIe host bridge: irq_first %u\n", xam->pcie_irq);
> > +
> > + snprintf(path, sizeof(path), "device-model/%d/pcie_host/num_irqs",
> > + xen_domid);
> > + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> > + if (qemu_strtoui(value, NULL, 10, &tmp)) {
> > + error_report("xenpv: Failed to get 'pcie_host/num_irqs' prop");
> > + goto out;
> > + }
> > + free(value);
> > + value = NULL;
> > + if (tmp != GPEX_NUM_IRQS) {
> > + error_report("xenpv: Wrong 'pcie_host/num_irqs' value %u", tmp);
> > + goto out;
> > + }
> > + DPRINTF("PCIe host bridge: num_irqs %u\n", tmp);
> > +
> > + rc = 0;
> > +out:
> > + if (value) {
> > + free(value);
> > + }
> > +
> > + return rc;
> > +}
> > +
> > void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
> > {
> > hw_error("Invalid ioreq type 0x%x\n", req->type);
> > @@ -189,6 +369,12 @@ static void xen_arm_init(MachineState *machine)
> > xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
> >
> > xen_create_virtio_mmio_devices(xam);
> > + if (!xen_get_pcie_params(xam)) {
> > + xen_create_pcie(xam);
> > + } else {
> > + DPRINTF("PCIe host bridge is not available,"
> > + "only virtio-mmio can be used\n");
> > + }
> >
> > #ifdef CONFIG_TPM
> > if (xam->cfg.tpm_base_addr) {
> > diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> > index 565dc39c8f..0f78f15057 100644
> > --- a/hw/xen/xen-hvm-common.c
> > +++ b/hw/xen/xen-hvm-common.c
> > @@ -47,6 +47,8 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> > g_free(pfn_list);
> > }
> >
> > +uint16_t xen_pci_segment;
> > +
> > static void xen_set_memory(struct MemoryListener *listener,
> > MemoryRegionSection *section,
> > bool add)
> > @@ -382,7 +384,12 @@ static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
> > }
> >
> > QLIST_FOREACH(xendev, &state->dev_list, entry) {
> > - if (xendev->sbdf != sbdf) {
> > + /*
> > + * As we append xen_pci_segment just before forming dm_op in
> > + * xen_map_pcidev() we need to check with appended xen_pci_segment
> > + * here as well.
> > + */
> > + if ((xendev->sbdf | (xen_pci_segment << 16)) != sbdf) {
> > continue;
> > }
> >
> > diff --git a/include/hw/xen/xen_native.h b/include/hw/xen/xen_native.h
> > index 6f09c48823..2b1debaff4 100644
> > --- a/include/hw/xen/xen_native.h
> > +++ b/include/hw/xen/xen_native.h
> > @@ -431,6 +431,8 @@ static inline void xen_unmap_io_section(domid_t dom,
> > 0, start_addr, end_addr);
> > }
> >
> > +extern uint16_t xen_pci_segment;
> > +
> > static inline void xen_map_pcidev(domid_t dom,
> > ioservid_t ioservid,
> > PCIDevice *pci_dev)
> > @@ -441,7 +443,8 @@ static inline void xen_map_pcidev(domid_t dom,
> >
> > trace_xen_map_pcidev(ioservid, pci_dev_bus_num(pci_dev),
> > PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
> > - xendevicemodel_map_pcidev_to_ioreq_server(xen_dmod, dom, ioservid, 0,
> > + xendevicemodel_map_pcidev_to_ioreq_server(xen_dmod, dom, ioservid,
> > + xen_pci_segment,
> > pci_dev_bus_num(pci_dev),
> > PCI_SLOT(pci_dev->devfn),
> > PCI_FUNC(pci_dev->devfn));
> > @@ -457,7 +460,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
> >
> > trace_xen_unmap_pcidev(ioservid, pci_dev_bus_num(pci_dev),
> > PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
> > - xendevicemodel_unmap_pcidev_from_ioreq_server(xen_dmod, dom, ioservid, 0,
> > + xendevicemodel_unmap_pcidev_from_ioreq_server(xen_dmod, dom, ioservid,
> > + xen_pci_segment,
> > pci_dev_bus_num(pci_dev),
> > PCI_SLOT(pci_dev->devfn),
> > PCI_FUNC(pci_dev->devfn));
> > --
> > 2.42.0
> >
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support
2023-11-22 23:44 ` Vikram Garhwal
@ 2023-11-23 0:11 ` Volodymyr Babchuk
0 siblings, 0 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-23 0:11 UTC (permalink / raw)
To: Vikram Garhwal
Cc: Stefano Stabellini, qemu-devel@nongnu.org, David Woodhouse,
Julien Grall, Oleksandr Tyshchenko, Peter Maydell, Anthony Perard,
Paul Durrant, open list:ARM TCG CPUs, open list:X86 Xen CPUs
Hi Vikram,
Vikram Garhwal <vikram.garhwal@amd.com> writes:
> Hi Volodymyr,
> Thank you for sharing this patch. I have a few comments below.
> On Wed, Nov 22, 2023 at 02:39:46PM -0800, Stefano Stabellini wrote:
>> +Vikram
>>
>> On Tue, 21 Nov 2023, Volodymyr Babchuk wrote:
>> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> >
>> > The bridge is needed for virtio-pci support, as QEMU can emulate the
>> > whole bridge with any virtio-pci devices connected to it.
>> >
>> > This patch provides a flexible way to configure PCIe bridge resources
>> > with xenstore. We made this for several reasons:
>> >
>> > - We don't want to clash with vPCI devices, so we need information
>> > from Xen toolstack on which PCI bus to use.
>> > - The guest memory layout that describes these resources is not stable
>> > and may vary between guests, so we cannot rely on static resources
>> > to be always the same for both ends.
>> > - Also the device-models which run in different domains and serve
>> > virtio-pci devices for the same guest should use different host
>> > bridge resources for Xen to distinguish. The rule for the guest
>> > device-tree generation is one PCI host bridge per backend domain.
>> >
>> > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> > Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>
>> There is still a discussion ongoing on xen-devel if / how to register a
>> PCI Root Complex or individual BDFs. In the meantime a couple of
>> comments.
>>
>> Typically emulated devices are configured in QEMU via QEMU command line
>> parameters, not xenstore. If you are running QEMU in a domU (instead of
>> Dom0) you can always read config parameters from xenstore from a wrapper
>> bash script (using xenstore-read) and then pass the right command line
>> options to QEMU.
>>
>> If you need help in adding new QEMU command line options, Vikram (CCed)
>> can help.
>>
>>
> I agree with Stefano here. Setting properties would be better and easier.
Okay, I'll look at this.
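Something like the following, I suppose (the property name and the wiring
below are purely illustrative, mirroring how the existing "tpm-base-addr"
machine property is handled in xen_arm.c):

    static void xen_arm_set_pcie_ecam_base(Object *obj, Visitor *v,
                                           const char *name, void *opaque,
                                           Error **errp)
    {
        XenArmState *xam = XEN_ARM(obj);

        visit_type_uint64(v, name, &xam->pcie_ecam.base, errp);
    }

    /* in xen_arm_machine_class_init(); a getter could be added similarly */
    object_class_property_add(oc, "pcie-ecam-base", "uint64_t",
                              NULL, xen_arm_set_pcie_ecam_base, NULL, NULL);

The toolstack (or a wrapper script) could then pass e.g.
"-machine xenpvh,pcie-ecam-base=0x10000000" instead of going through
xenstore.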
> I have one question here:
> 1. If there are multiple QEMU backends, the Xen tools will end up
> creating a lot of entries in xenstore, and we need to remove those xenstore
> entries when a backend goes away. Is this already handled in the Xen tools?
Well, this is not a classic PV backend, so common code in Xen Tools does
not handle those entries. I am not sure if tools remove entries right
now, because I am not the original author. But we definitely will remove
them in the final version of patches.
--
WBR, Volodymyr
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support
2023-11-21 22:10 ` [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support Volodymyr Babchuk
2023-11-22 22:39 ` Stefano Stabellini
@ 2023-11-24 12:30 ` Igor Mammedov
2023-11-24 15:47 ` Volodymyr Babchuk
1 sibling, 1 reply; 52+ messages in thread
From: Igor Mammedov @ 2023-11-24 12:30 UTC (permalink / raw)
To: Volodymyr Babchuk
Cc: qemu-devel@nongnu.org, David Woodhouse, Stefano Stabellini,
Julien Grall, Oleksandr Tyshchenko, Peter Maydell, Anthony Perard,
Paul Durrant, open list:ARM TCG CPUs, open list:X86 Xen CPUs
On Tue, 21 Nov 2023 22:10:28 +0000
Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> The bridge is needed for virtio-pci support, as QEMU can emulate the
> whole bridge with any virtio-pci devices connected to it.
>
> This patch provides a flexible way to configure PCIe bridge resources
> with xenstore. We made this for several reasons:
>
> - We don't want to clash with vPCI devices, so we need information
> from Xen toolstack on which PCI bus to use.
> - The guest memory layout that describes these resources is not stable
> and may vary between guests, so we cannot rely on static resources
> to be always the same for both ends.
> - Also the device-models which run in different domains and serve
> virtio-pci devices for the same guest should use different host
> bridge resources for Xen to distinguish. The rule for the guest
> device-tree generation is one PCI host bridge per backend domain.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>
> ---
>
> Changes from v1:
>
> - Renamed virtio_pci_host to pcie_host entries in XenStore, because
> there is nothing specific to virtio-pci: any PCI device can be
> emulated via this newly created bridge.
> ---
> hw/arm/xen_arm.c | 186 ++++++++++++++++++++++++++++++++++++
> hw/xen/xen-hvm-common.c | 9 +-
> include/hw/xen/xen_native.h | 8 +-
> 3 files changed, 200 insertions(+), 3 deletions(-)
>
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index b9c3ae14b6..d506d55d0f 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -22,6 +22,7 @@
> */
>
> #include "qemu/osdep.h"
> +#include "qemu/cutils.h"
> #include "qemu/error-report.h"
> #include "qapi/qapi-commands-migration.h"
> #include "qapi/visitor.h"
> @@ -34,6 +35,9 @@
> #include "hw/xen/xen-hvm-common.h"
> #include "sysemu/tpm.h"
> #include "hw/xen/arch_hvm.h"
> +#include "exec/address-spaces.h"
> +#include "hw/pci-host/gpex.h"
> +#include "hw/virtio/virtio-pci.h"
>
> #define TYPE_XEN_ARM MACHINE_TYPE_NAME("xenpvh")
> OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
> @@ -58,6 +62,11 @@ struct XenArmState {
> struct {
> uint64_t tpm_base_addr;
> } cfg;
> +
> + MemMapEntry pcie_mmio;
> + MemMapEntry pcie_ecam;
> + MemMapEntry pcie_mmio_high;
> + int pcie_irq;
> };
>
> static MemoryRegion ram_lo, ram_hi;
> @@ -73,6 +82,7 @@ static MemoryRegion ram_lo, ram_hi;
> #define NR_VIRTIO_MMIO_DEVICES \
> (GUEST_VIRTIO_MMIO_SPI_LAST - GUEST_VIRTIO_MMIO_SPI_FIRST)
>
> +/* TODO It should be xendevicemodel_set_pci_intx_level() for PCI interrupts. */
> static void xen_set_irq(void *opaque, int irq, int level)
> {
> if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
> @@ -129,6 +139,176 @@ static void xen_init_ram(MachineState *machine)
> }
> }
>
> +static void xen_create_pcie(XenArmState *xam)
> +{
> + MemoryRegion *mmio_alias, *mmio_alias_high, *mmio_reg;
> + MemoryRegion *ecam_alias, *ecam_reg;
> + DeviceState *dev;
> + int i;
> +
> + dev = qdev_new(TYPE_GPEX_HOST);
> + sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
> +
> + /* Map ECAM space */
> + ecam_alias = g_new0(MemoryRegion, 1);
> + ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
> + memory_region_init_alias(ecam_alias, OBJECT(dev), "pcie-ecam",
> + ecam_reg, 0, xam->pcie_ecam.size);
> + memory_region_add_subregion(get_system_memory(), xam->pcie_ecam.base,
> + ecam_alias);
> +
> + /* Map the MMIO space */
> + mmio_alias = g_new0(MemoryRegion, 1);
> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
> + mmio_reg,
> + xam->pcie_mmio.base,
> + xam->pcie_mmio.size);
> + memory_region_add_subregion(get_system_memory(), xam->pcie_mmio.base,
> + mmio_alias);
> +
> + /* Map the MMIO_HIGH space */
> + mmio_alias_high = g_new0(MemoryRegion, 1);
> + memory_region_init_alias(mmio_alias_high, OBJECT(dev), "pcie-mmio-high",
> + mmio_reg,
> + xam->pcie_mmio_high.base,
> + xam->pcie_mmio_high.size);
> + memory_region_add_subregion(get_system_memory(), xam->pcie_mmio_high.base,
> + mmio_alias_high);
> +
> + /* Legacy PCI interrupts (#INTA - #INTD) */
> + for (i = 0; i < GPEX_NUM_IRQS; i++) {
> + qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
> + xam->pcie_irq + i);
> +
> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
> + gpex_set_irq_num(GPEX_HOST(dev), i, xam->pcie_irq + i);
> + }
> +
> + DPRINTF("Created PCIe host bridge\n");
replace DPRINTFs with trace_foo(), see 885f380f7be for example
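i.e. roughly the following (the event name is made up here; the mechanics
are to add an entry to hw/arm/trace-events, include the generated "trace.h"
and call the trace_* helper):

    /* hw/arm/trace-events would gain a line such as:
     *   xen_arm_create_pcie(void) "Created PCIe host bridge"
     * and this DPRINTF then becomes:
     */
    trace_xen_arm_create_pcie();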
> +}
> +
> +static bool xen_read_pcie_prop(XenArmState *xam, unsigned int xen_domid,
> + const char *prop_name, unsigned long *data)
> +{
> + char path[128], *value = NULL;
> + unsigned int len;
> + bool ret = true;
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/%s",
> + xen_domid, prop_name);
try to avoid usage of snprintf() in series
see d2fe2264679 as an example
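i.e. something like this (sketch only, reusing the variables from the hunk
above):

    g_autofree char *path = g_strdup_printf("device-model/%u/pcie_host/%s",
                                            xen_domid, prop_name);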
> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
maybe use g_autofree
> +
> + if (qemu_strtou64(value, NULL, 16, data)) {
> + error_report("xenpv: Failed to get 'pcie_host/%s' prop",
> + prop_name);
if it's an error, then it would be better to use error_fatal so QEMU would
exit on the first encounter.
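e.g. (sketch):

    error_setg(&error_fatal,
               "xenpv: failed to parse 'pcie_host/%s' property", prop_name);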
> + ret = false;
> + }
> +
> + free(value);
> +
> + return ret;
> +}
> +
> +static int xen_get_pcie_params(XenArmState *xam)
> +{
> + char path[128], *value = NULL, **entries = NULL;
> + unsigned int len, tmp;
> + int rc = -1;
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host",
> + xen_domid);
> + entries = xs_directory(xam->state->xenstore, XBT_NULL, path, &len);
> + if (entries == NULL) {
> + error_report("xenpv: 'pcie_host' dir is not present");
> + return -1;
> + }
> + free(entries);
maybe use g_autofree for strings in this function
> + if (len != 9) {
> + error_report("xenpv: Unexpected number of entries in 'pcie_host' dir");
> + goto out;
> + }
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/id",
> + xen_domid);
> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> + if (qemu_strtoui(value, NULL, 10, &tmp)) {
> + error_report("xenpv: Failed to get 'pcie_host/id' prop");
> + goto out;
> + }
> + free(value);
> + value = NULL;
and then get rid of pointless assignment
> + if (tmp > 0xffff) {
> + error_report("xenpv: Wrong 'pcie_host/id' value %u", tmp);
> + goto out;
> + }
> + xen_pci_segment = tmp;
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "ecam_base",
> + &xam->pcie_ecam.base)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "ecam_size",
> + &xam->pcie_ecam.size)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "mem_base",
> + &xam->pcie_mmio.base)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "mem_size",
> + &xam->pcie_mmio.base)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "prefetch_mem_base",
> + &xam->pcie_mmio_high.base)) {
> + goto out;
> + }
> +
> + if (!xen_read_pcie_prop(xam, xen_domid, "prefetch_mem_size",
> + &xam->pcie_mmio_high.size)) {
> + goto out;
> + }
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/irq_first",
> + xen_domid);
> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> + if (qemu_strtoi(value, NULL, 10, &xam->pcie_irq)) {
> + error_report("xenpv: Failed to get 'pcie_host/irq_first' prop");
> + goto out;
> + }
> + free(value);
> + value = NULL;
> + DPRINTF("PCIe host bridge: irq_first %u\n", xam->pcie_irq);
> +
> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/num_irqs",
> + xen_domid);
> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> + if (qemu_strtoui(value, NULL, 10, &tmp)) {
> + error_report("xenpv: Failed to get 'pcie_host/num_irqs' prop");
> + goto out;
> + }
> + free(value);
> + value = NULL;
> + if (tmp != GPEX_NUM_IRQS) {
> + error_report("xenpv: Wrong 'pcie_host/num_irqs' value %u", tmp);
> + goto out;
> + }
> + DPRINTF("PCIe host bridge: num_irqs %u\n", tmp);
> +
> + rc = 0;
> +out:
> + if (value) {
> + free(value);
> + }
> +
> + return rc;
> +}
> +
> void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
> {
> hw_error("Invalid ioreq type 0x%x\n", req->type);
> @@ -189,6 +369,12 @@ static void xen_arm_init(MachineState *machine)
> xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
>
> xen_create_virtio_mmio_devices(xam);
> + if (!xen_get_pcie_params(xam)) {
> + xen_create_pcie(xam);
> + } else {
> + DPRINTF("PCIe host bridge is not available,"
> + "only virtio-mmio can be used\n");
so if something went wrong, it will just be ignored.
Is that really the expected behavior?
> + }
>
> #ifdef CONFIG_TPM
> if (xam->cfg.tpm_base_addr) {
> diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> index 565dc39c8f..0f78f15057 100644
> --- a/hw/xen/xen-hvm-common.c
> +++ b/hw/xen/xen-hvm-common.c
> @@ -47,6 +47,8 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> g_free(pfn_list);
> }
>
> +uint16_t xen_pci_segment;
> +
> static void xen_set_memory(struct MemoryListener *listener,
> MemoryRegionSection *section,
> bool add)
> @@ -382,7 +384,12 @@ static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
> }
>
> QLIST_FOREACH(xendev, &state->dev_list, entry) {
> - if (xendev->sbdf != sbdf) {
> + /*
> + * As we append xen_pci_segment just before forming dm_op in
> + * xen_map_pcidev() we need to check with appended xen_pci_segment
> + * here as well.
> + */
> + if ((xendev->sbdf | (xen_pci_segment << 16)) != sbdf) {
> continue;
> }
>
> diff --git a/include/hw/xen/xen_native.h b/include/hw/xen/xen_native.h
> index 6f09c48823..2b1debaff4 100644
> --- a/include/hw/xen/xen_native.h
> +++ b/include/hw/xen/xen_native.h
> @@ -431,6 +431,8 @@ static inline void xen_unmap_io_section(domid_t dom,
> 0, start_addr, end_addr);
> }
>
> +extern uint16_t xen_pci_segment;
> +
> static inline void xen_map_pcidev(domid_t dom,
> ioservid_t ioservid,
> PCIDevice *pci_dev)
> @@ -441,7 +443,8 @@ static inline void xen_map_pcidev(domid_t dom,
>
> trace_xen_map_pcidev(ioservid, pci_dev_bus_num(pci_dev),
> PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
> - xendevicemodel_map_pcidev_to_ioreq_server(xen_dmod, dom, ioservid, 0,
> + xendevicemodel_map_pcidev_to_ioreq_server(xen_dmod, dom, ioservid,
> + xen_pci_segment,
> pci_dev_bus_num(pci_dev),
> PCI_SLOT(pci_dev->devfn),
> PCI_FUNC(pci_dev->devfn));
> @@ -457,7 +460,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
>
> trace_xen_unmap_pcidev(ioservid, pci_dev_bus_num(pci_dev),
> PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
> - xendevicemodel_unmap_pcidev_from_ioreq_server(xen_dmod, dom, ioservid, 0,
> + xendevicemodel_unmap_pcidev_from_ioreq_server(xen_dmod, dom, ioservid,
> + xen_pci_segment,
> pci_dev_bus_num(pci_dev),
> PCI_SLOT(pci_dev->devfn),
> PCI_FUNC(pci_dev->devfn));
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support
2023-11-24 12:30 ` Igor Mammedov
@ 2023-11-24 15:47 ` Volodymyr Babchuk
0 siblings, 0 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-24 15:47 UTC (permalink / raw)
To: Igor Mammedov
Cc: qemu-devel@nongnu.org, David Woodhouse, Stefano Stabellini,
Julien Grall, Oleksandr Tyshchenko, Peter Maydell, Anthony Perard,
Paul Durrant, open list:ARM TCG CPUs, open list:X86 Xen CPUs
Hi Igor,
Thank you for the review,
Igor Mammedov <imammedo@redhat.com> writes:
> On Tue, 21 Nov 2023 22:10:28 +0000
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The bridge is needed for virtio-pci support, as QEMU can emulate the
>> whole bridge with any virtio-pci devices connected to it.
>>
>> This patch provides a flexible way to configure PCIe bridge resources
>> with xenstore. We made this for several reasons:
>>
>> - We don't want to clash with vPCI devices, so we need information
>> from Xen toolstack on which PCI bus to use.
>> - The guest memory layout that describes these resources is not stable
>> and may vary between guests, so we cannot rely on static resources
>> to be always the same for both ends.
>> - Also the device-models which run in different domains and serve
>> virtio-pci devices for the same guest should use different host
>> bridge resources for Xen to distinguish. The rule for the guest
>> device-tree generation is one PCI host bridge per backend domain.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>
>> ---
>>
>> Changes from v1:
>>
>> - Renamed virtio_pci_host to pcie_host entries in XenStore, because
>> there is nothing specific to virtio-pci: any PCI device can be
>> emulated via this newly created bridge.
>> ---
>> hw/arm/xen_arm.c | 186 ++++++++++++++++++++++++++++++++++++
>> hw/xen/xen-hvm-common.c | 9 +-
>> include/hw/xen/xen_native.h | 8 +-
>> 3 files changed, 200 insertions(+), 3 deletions(-)
>>
>> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
>> index b9c3ae14b6..d506d55d0f 100644
>> --- a/hw/arm/xen_arm.c
>> +++ b/hw/arm/xen_arm.c
>> @@ -22,6 +22,7 @@
>> */
>>
>> #include "qemu/osdep.h"
>> +#include "qemu/cutils.h"
>> #include "qemu/error-report.h"
>> #include "qapi/qapi-commands-migration.h"
>> #include "qapi/visitor.h"
>> @@ -34,6 +35,9 @@
>> #include "hw/xen/xen-hvm-common.h"
>> #include "sysemu/tpm.h"
>> #include "hw/xen/arch_hvm.h"
>> +#include "exec/address-spaces.h"
>> +#include "hw/pci-host/gpex.h"
>> +#include "hw/virtio/virtio-pci.h"
>>
>> #define TYPE_XEN_ARM MACHINE_TYPE_NAME("xenpvh")
>> OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
>> @@ -58,6 +62,11 @@ struct XenArmState {
>> struct {
>> uint64_t tpm_base_addr;
>> } cfg;
>> +
>> + MemMapEntry pcie_mmio;
>> + MemMapEntry pcie_ecam;
>> + MemMapEntry pcie_mmio_high;
>> + int pcie_irq;
>> };
>>
>> static MemoryRegion ram_lo, ram_hi;
>> @@ -73,6 +82,7 @@ static MemoryRegion ram_lo, ram_hi;
>> #define NR_VIRTIO_MMIO_DEVICES \
>> (GUEST_VIRTIO_MMIO_SPI_LAST - GUEST_VIRTIO_MMIO_SPI_FIRST)
>>
>> +/* TODO It should be xendevicemodel_set_pci_intx_level() for PCI interrupts. */
>> static void xen_set_irq(void *opaque, int irq, int level)
>> {
>> if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
>> @@ -129,6 +139,176 @@ static void xen_init_ram(MachineState *machine)
>> }
>> }
>>
>> +static void xen_create_pcie(XenArmState *xam)
>> +{
>> + MemoryRegion *mmio_alias, *mmio_alias_high, *mmio_reg;
>> + MemoryRegion *ecam_alias, *ecam_reg;
>> + DeviceState *dev;
>> + int i;
>> +
>> + dev = qdev_new(TYPE_GPEX_HOST);
>> + sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
>> +
>> + /* Map ECAM space */
>> + ecam_alias = g_new0(MemoryRegion, 1);
>> + ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
>> + memory_region_init_alias(ecam_alias, OBJECT(dev), "pcie-ecam",
>> + ecam_reg, 0, xam->pcie_ecam.size);
>> + memory_region_add_subregion(get_system_memory(), xam->pcie_ecam.base,
>> + ecam_alias);
>> +
>> + /* Map the MMIO space */
>> + mmio_alias = g_new0(MemoryRegion, 1);
>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>> + mmio_reg,
>> + xam->pcie_mmio.base,
>> + xam->pcie_mmio.size);
>> + memory_region_add_subregion(get_system_memory(), xam->pcie_mmio.base,
>> + mmio_alias);
>> +
>> + /* Map the MMIO_HIGH space */
>> + mmio_alias_high = g_new0(MemoryRegion, 1);
>> + memory_region_init_alias(mmio_alias_high, OBJECT(dev), "pcie-mmio-high",
>> + mmio_reg,
>> + xam->pcie_mmio_high.base,
>> + xam->pcie_mmio_high.size);
>> + memory_region_add_subregion(get_system_memory(), xam->pcie_mmio_high.base,
>> + mmio_alias_high);
>> +
>> + /* Legacy PCI interrupts (#INTA - #INTD) */
>> + for (i = 0; i < GPEX_NUM_IRQS; i++) {
>> + qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
>> + xam->pcie_irq + i);
>> +
>> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
>> + gpex_set_irq_num(GPEX_HOST(dev), i, xam->pcie_irq + i);
>> + }
>> +
>
>> + DPRINTF("Created PCIe host bridge\n");
> replace DPRINTFs with trace_foo(), see 885f380f7be for example
>
>> +}
>> +
>> +static bool xen_read_pcie_prop(XenArmState *xam, unsigned int xen_domid,
>> + const char *prop_name, unsigned long *data)
>> +{
>> + char path[128], *value = NULL;
>> + unsigned int len;
>> + bool ret = true;
>> +
>> + snprintf(path, sizeof(path), "device-model/%d/pcie_host/%s",
>> + xen_domid, prop_name);
>
> try to avoid usage of snprintf() in series
> see d2fe2264679 as an example
>
This whole function will be dropped in the next version.
>> + value = xs_read(xam->state->xenstore, XBT_NULL, path, &len);
> maybe use g_autofree
I am not sure if this is a good idea, as xs_read() is provided by an
external library which uses plain old malloc().
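To spell out the concern (sketch): g_autofree releases the pointer with
g_free() when it goes out of scope, while xs_read() returns a buffer that
is documented to be released with plain free():

    g_autofree char *value = xs_read(xam->state->xenstore, XBT_NULL,
                                     path, &len);
    /* on scope exit this effectively runs g_free() on a malloc()ed buffer */

So g_autofree is probably fine for g_strdup_printf()'d paths, just not for
the xs_read() results.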
>> xen_create_virtio_mmio_devices(xam);
>> + if (!xen_get_pcie_params(xam)) {
>> + xen_create_pcie(xam);
>> + } else {
>> + DPRINTF("PCIe host bridge is not available,"
>> + "only virtio-mmio can be used\n");
>
> so if something went wrong, it will be just ignored.
> Is it really expected behavior?
>
In the new version I reworked this section. Now we have 3 possible
outcomes (rough sketch below):
1. No PCIe config was provided at all. This is fine, as the user doesn't
want to use PCIe.
2. Full PCIe config was provided. We continue with creating the PCIe bridge.
3. Partial config was provided. This is an error and we exit.
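In pseudo-C the shape is roughly this (the exact return-value convention is
illustrative, not final):

    rc = xen_get_pcie_params(xam);
    if (rc == 0) {
        xen_create_pcie(xam);              /* full config: create the bridge */
    } else if (rc == -ENODEV) {            /* no pcie_host config at all */
        DPRINTF("PCIe host bridge is not available, "
                "only virtio-mmio can be used\n");
    } else {
        error_report("xenpv: incomplete pcie_host configuration");
        exit(1);                           /* partial config is a hard error */
    }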
--
WBR, Volodymyr
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-21 22:10 [PATCH v2 0/6] xen-arm: add support for virtio-pci Volodymyr Babchuk
` (4 preceding siblings ...)
2023-11-21 22:10 ` [PATCH v2 6/6] xen_arm: Add virtual PCIe host bridge support Volodymyr Babchuk
@ 2023-11-21 22:10 ` Volodymyr Babchuk
2023-11-22 17:11 ` Paul Durrant
5 siblings, 1 reply; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-21 22:10 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
Oleksandr Tyshchenko, Volodymyr Babchuk, Anthony Perard,
Paul Durrant, open list:X86 Xen CPUs
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
inherit the owner of the directory.
Note that for a domain other than Dom0 (a non-toolstack domain), the
"driver_domain" property should be set in the domain config file so that
the toolstack creates the required directories in advance.
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
hw/xen/xen_pvdev.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/xen/xen_pvdev.c b/hw/xen/xen_pvdev.c
index c5ad71e8dc..42bdd4f6c8 100644
--- a/hw/xen/xen_pvdev.c
+++ b/hw/xen/xen_pvdev.c
@@ -60,7 +60,8 @@ void xen_config_cleanup(void)
int xenstore_mkdir(char *path, int p)
{
- if (!qemu_xen_xs_create(xenstore, 0, 0, xen_domid, p, path)) {
+ if (!qemu_xen_xs_create(xenstore, 0, XS_PRESERVE_OWNER,
+ xen_domid, p, path)) {
xen_pv_printf(NULL, 0, "xs_mkdir %s: failed\n", path);
return -1;
}
--
2.42.0
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-21 22:10 ` [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory Volodymyr Babchuk
@ 2023-11-22 17:11 ` Paul Durrant
2023-11-22 22:29 ` Stefano Stabellini
0 siblings, 1 reply; 52+ messages in thread
From: Paul Durrant @ 2023-11-22 17:11 UTC (permalink / raw)
To: Volodymyr Babchuk, qemu-devel@nongnu.org
Cc: David Woodhouse, Stefano Stabellini, Julien Grall,
Oleksandr Tyshchenko, Anthony Perard, open list:X86 Xen CPUs
On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
> inherit the owner of the directory.
Ah... so that's why the previous patch is there.
This is not the right way to fix it. The QEMU Xen support is *assuming*
that QEMU is either running in, or emulating, dom0. In the emulation
case this is probably fine, but in the 'real Xen' case it should be using
the correct domid for node creation. I guess this could either be
supplied on the command line or discerned by reading the local domain
'domid' node.
>
> Note that for other than Dom0 domain (non toolstack domain) the
> "driver_domain" property should be set in domain config file for the
> toolstack to create required directories in advance.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
> hw/xen/xen_pvdev.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/xen/xen_pvdev.c b/hw/xen/xen_pvdev.c
> index c5ad71e8dc..42bdd4f6c8 100644
> --- a/hw/xen/xen_pvdev.c
> +++ b/hw/xen/xen_pvdev.c
> @@ -60,7 +60,8 @@ void xen_config_cleanup(void)
>
> int xenstore_mkdir(char *path, int p)
> {
> - if (!qemu_xen_xs_create(xenstore, 0, 0, xen_domid, p, path)) {
> + if (!qemu_xen_xs_create(xenstore, 0, XS_PRESERVE_OWNER,
> + xen_domid, p, path)) {
> xen_pv_printf(NULL, 0, "xs_mkdir %s: failed\n", path);
> return -1;
> }
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-22 17:11 ` Paul Durrant
@ 2023-11-22 22:29 ` Stefano Stabellini
2023-11-22 23:03 ` David Woodhouse
0 siblings, 1 reply; 52+ messages in thread
From: Stefano Stabellini @ 2023-11-22 22:29 UTC (permalink / raw)
To: paul
Cc: Volodymyr Babchuk, qemu-devel@nongnu.org, David Woodhouse,
Stefano Stabellini, Julien Grall, Oleksandr Tyshchenko,
Anthony Perard, open list:X86 Xen CPUs
On Wed, 22 Nov 2023, Paul Durrant wrote:
> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >
> > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
> > inherit the owner of the directory.
>
> Ah... so that's why the previous patch is there.
>
> This is not the right way to fix it. The QEMU Xen support is *assuming* that
> QEMU is either running in, or emulating, dom0. In the emulation case this is
> probably fine, but the 'real Xen' case it should be using the correct domid
> for node creation. I guess this could either be supplied on the command line
> or discerned by reading the local domain 'domid' node.
yes, it should be passed as command line option to QEMU
> > Note that for other than Dom0 domain (non toolstack domain) the
> > "driver_domain" property should be set in domain config file for the
> > toolstack to create required directories in advance.
> >
> > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> > ---
> > hw/xen/xen_pvdev.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/xen/xen_pvdev.c b/hw/xen/xen_pvdev.c
> > index c5ad71e8dc..42bdd4f6c8 100644
> > --- a/hw/xen/xen_pvdev.c
> > +++ b/hw/xen/xen_pvdev.c
> > @@ -60,7 +60,8 @@ void xen_config_cleanup(void)
> > int xenstore_mkdir(char *path, int p)
> > {
> > - if (!qemu_xen_xs_create(xenstore, 0, 0, xen_domid, p, path)) {
> > + if (!qemu_xen_xs_create(xenstore, 0, XS_PRESERVE_OWNER,
> > + xen_domid, p, path)) {
> > xen_pv_printf(NULL, 0, "xs_mkdir %s: failed\n", path);
> > return -1;
> > }
>
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-22 22:29 ` Stefano Stabellini
@ 2023-11-22 23:03 ` David Woodhouse
2023-11-22 23:09 ` Stefano Stabellini
0 siblings, 1 reply; 52+ messages in thread
From: David Woodhouse @ 2023-11-22 23:03 UTC (permalink / raw)
To: Stefano Stabellini, paul
Cc: Volodymyr Babchuk, qemu-devel@nongnu.org, Julien Grall,
Oleksandr Tyshchenko, Anthony Perard, open list:X86 Xen CPUs
On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
> On Wed, 22 Nov 2023, Paul Durrant wrote:
> > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > >
> > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
> > > inherit the owner of the directory.
> >
> > Ah... so that's why the previous patch is there.
> >
> > This is not the right way to fix it. The QEMU Xen support is *assuming* that
> > QEMU is either running in, or emulating, dom0. In the emulation case this is
> > probably fine, but the 'real Xen' case it should be using the correct domid
> > for node creation. I guess this could either be supplied on the command line
> > or discerned by reading the local domain 'domid' node.
>
> yes, it should be passed as command line option to QEMU
I'm not sure I like the idea of a command line option for something
which QEMU could discover for itself.
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-22 23:03 ` David Woodhouse
@ 2023-11-22 23:09 ` Stefano Stabellini
2023-11-22 23:11 ` David Woodhouse
0 siblings, 1 reply; 52+ messages in thread
From: Stefano Stabellini @ 2023-11-22 23:09 UTC (permalink / raw)
To: David Woodhouse
Cc: Stefano Stabellini, paul, Volodymyr Babchuk,
qemu-devel@nongnu.org, Julien Grall, Oleksandr Tyshchenko,
Anthony Perard, open list:X86 Xen CPUs
On Wed, 22 Nov 2023, David Woodhouse wrote:
> On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
> > On Wed, 22 Nov 2023, Paul Durrant wrote:
> > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > >
> > > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
> > > > inherit the owner of the directory.
> > >
> > > Ah... so that's why the previous patch is there.
> > >
> > > This is not the right way to fix it. The QEMU Xen support is *assuming* that
> > > QEMU is either running in, or emulating, dom0. In the emulation case this is
> > > probably fine, but the 'real Xen' case it should be using the correct domid
> > > for node creation. I guess this could either be supplied on the command line
> > > or discerned by reading the local domain 'domid' node.
> >
> > yes, it should be passed as command line option to QEMU
>
> I'm not sure I like the idea of a command line option for something
> which QEMU could discover for itself.
That's fine too. I meant to say "yes, as far as I know the toolstack
passes the domid to QEMU as a command line option today".
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-22 23:09 ` Stefano Stabellini
@ 2023-11-22 23:11 ` David Woodhouse
2023-11-22 23:20 ` Stefano Stabellini
0 siblings, 1 reply; 52+ messages in thread
From: David Woodhouse @ 2023-11-22 23:11 UTC (permalink / raw)
To: Stefano Stabellini
Cc: paul, Volodymyr Babchuk, qemu-devel@nongnu.org, Julien Grall,
Oleksandr Tyshchenko, Anthony Perard, open list:X86 Xen CPUs
On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
> On Wed, 22 Nov 2023, David Woodhouse wrote:
> > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
> > > On Wed, 22 Nov 2023, Paul Durrant wrote:
> > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > > >
> > > > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
> > > > > inherit the owner of the directory.
> > > >
> > > > Ah... so that's why the previous patch is there.
> > > >
> > > > This is not the right way to fix it. The QEMU Xen support is *assuming* that
> > > > QEMU is either running in, or emulating, dom0. In the emulation case this is
> > > > probably fine, but the 'real Xen' case it should be using the correct domid
> > > > for node creation. I guess this could either be supplied on the command line
> > > > or discerned by reading the local domain 'domid' node.
> > >
> > > yes, it should be passed as command line option to QEMU
> >
> > I'm not sure I like the idea of a command line option for something
> > which QEMU could discover for itself.
>
> That's fine too. I meant to say "yes, as far as I know the toolstack
> passes the domid to QEMU as a command line option today".
The -xen-domid argument on the QEMU command line today is the *guest*
domain ID, not the domain ID in which QEMU itself is running.
Or were you thinking of something different?
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-22 23:11 ` David Woodhouse
@ 2023-11-22 23:20 ` Stefano Stabellini
2023-11-22 23:46 ` Volodymyr Babchuk
0 siblings, 1 reply; 52+ messages in thread
From: Stefano Stabellini @ 2023-11-22 23:20 UTC (permalink / raw)
To: David Woodhouse
Cc: Stefano Stabellini, paul, Volodymyr Babchuk,
qemu-devel@nongnu.org, Julien Grall, Oleksandr Tyshchenko,
Anthony Perard, open list:X86 Xen CPUs
On Wed, 22 Nov 2023, David Woodhouse wrote:
> On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
> > On Wed, 22 Nov 2023, David Woodhouse wrote:
> > > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
> > > > On Wed, 22 Nov 2023, Paul Durrant wrote:
> > > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > > > >
> > > > > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
> > > > > > inherit the owner of the directory.
> > > > >
> > > > > Ah... so that's why the previous patch is there.
> > > > >
> > > > > This is not the right way to fix it. The QEMU Xen support is *assuming* that
> > > > > QEMU is either running in, or emulating, dom0. In the emulation case this is
> > > > > probably fine, but the 'real Xen' case it should be using the correct domid
> > > > > for node creation. I guess this could either be supplied on the command line
> > > > > or discerned by reading the local domain 'domid' node.
> > > >
> > > > yes, it should be passed as command line option to QEMU
> > >
> > > I'm not sure I like the idea of a command line option for something
> > > which QEMU could discover for itself.
> >
> > That's fine too. I meant to say "yes, as far as I know the toolstack
> > passes the domid to QEMU as a command line option today".
>
> The -xen-domid argument on the QEMU command line today is the *guest*
> domain ID, not the domain ID in which QEMU itself is running.
>
> Or were you thinking of something different?
Oops, you are right and I understand your comment better now. The
backend domid is not on the command line, but it should be discoverable
(in xenstore if I remember right).
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-22 23:20 ` Stefano Stabellini
@ 2023-11-22 23:46 ` Volodymyr Babchuk
2023-11-23 0:07 ` Volodymyr Babchuk
0 siblings, 1 reply; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-22 23:46 UTC (permalink / raw)
To: Stefano Stabellini
Cc: David Woodhouse, paul@xen.org, qemu-devel@nongnu.org,
Julien Grall, Oleksandr Tyshchenko, Anthony Perard,
open list:X86 Xen CPUs
Hi Stefano,
Stefano Stabellini <sstabellini@kernel.org> writes:
> On Wed, 22 Nov 2023, David Woodhouse wrote:
>> On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>> > On Wed, 22 Nov 2023, David Woodhouse wrote:
>> > > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>> > > > On Wed, 22 Nov 2023, Paul Durrant wrote:
>> > > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>> > > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> > > > > >
>> > > > > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
>> > > > > > inherit the owner of the directory.
>> > > > >
>> > > > > Ah... so that's why the previous patch is there.
>> > > > >
>> > > > > This is not the right way to fix it. The QEMU Xen support is *assuming* that
>> > > > > QEMU is either running in, or emulating, dom0. In the emulation case this is
>> > > > > probably fine, but the 'real Xen' case it should be using the correct domid
>> > > > > for node creation. I guess this could either be supplied on the command line
>> > > > > or discerned by reading the local domain 'domid' node.
>> > > >
>> > > > yes, it should be passed as command line option to QEMU
>> > >
>> > > I'm not sure I like the idea of a command line option for something
>> > > which QEMU could discover for itself.
>> >
>> > That's fine too. I meant to say "yes, as far as I know the toolstack
>> > passes the domid to QEMU as a command line option today".
>>
>> The -xen-domid argument on the QEMU command line today is the *guest*
>> domain ID, not the domain ID in which QEMU itself is running.
>>
>> Or were you thinking of something different?
>
> Ops, you are right and I understand your comment better now. The backend
> domid is not on the command line but it should be discoverable (on
> xenstore if I remember right).
Yes, it is just "~/domid". I'll add a function that reads it.
--
WBR, Volodymyr
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-22 23:46 ` Volodymyr Babchuk
@ 2023-11-23 0:07 ` Volodymyr Babchuk
2023-11-23 9:28 ` Paul Durrant
2023-11-24 12:56 ` Alex Bennée
0 siblings, 2 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-23 0:07 UTC (permalink / raw)
To: Volodymyr Babchuk, David Woodhouse
Cc: Stefano Stabellini, David Woodhouse, paul@xen.org,
qemu-devel@nongnu.org, Julien Grall, Oleksandr Tyshchenko,
Anthony Perard, open list:X86 Xen CPUs
Hi,
Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
> Hi Stefano,
>
> Stefano Stabellini <sstabellini@kernel.org> writes:
>
>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>> On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>>> > On Wed, 22 Nov 2023, David Woodhouse wrote:
>>> > > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>>> > > > On Wed, 22 Nov 2023, Paul Durrant wrote:
>>> > > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>> > > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>> > > > > >
>>> > > > > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
>>> > > > > > inherit the owner of the directory.
>>> > > > >
>>> > > > > Ah... so that's why the previous patch is there.
>>> > > > >
>>> > > > > This is not the right way to fix it. The QEMU Xen support is *assuming* that
>>> > > > > QEMU is either running in, or emulating, dom0. In the emulation case this is
>>> > > > > probably fine, but the 'real Xen' case it should be using the correct domid
>>> > > > > for node creation. I guess this could either be supplied on the command line
>>> > > > > or discerned by reading the local domain 'domid' node.
>>> > > >
>>> > > > yes, it should be passed as command line option to QEMU
>>> > >
>>> > > I'm not sure I like the idea of a command line option for something
>>> > > which QEMU could discover for itself.
>>> >
>>> > That's fine too. I meant to say "yes, as far as I know the toolstack
>>> > passes the domid to QEMU as a command line option today".
>>>
>>> The -xen-domid argument on the QEMU command line today is the *guest*
>>> domain ID, not the domain ID in which QEMU itself is running.
>>>
>>> Or were you thinking of something different?
>>
>> Ops, you are right and I understand your comment better now. The backend
>> domid is not on the command line but it should be discoverable (on
>> xenstore if I remember right).
>
> Yes, it is just "~/domid". I'll add a function that reads it.
Just a quick question to the QEMU folks: is it better to add a global
variable in which we store our own domain ID, or is it okay to read the
domid from Xenstore every time we need it?
If the global variable variant is better, what is the preferred place
to define this variable? system/globals.c?
--
WBR, Volodymyr
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 0:07 ` Volodymyr Babchuk
@ 2023-11-23 9:28 ` Paul Durrant
2023-11-23 10:45 ` David Woodhouse
2023-11-23 11:54 ` Volodymyr Babchuk
2023-11-24 12:56 ` Alex Bennée
1 sibling, 2 replies; 52+ messages in thread
From: Paul Durrant @ 2023-11-23 9:28 UTC (permalink / raw)
To: Volodymyr Babchuk, David Woodhouse
Cc: Stefano Stabellini, qemu-devel@nongnu.org, Julien Grall,
Oleksandr Tyshchenko, Anthony Perard, open list:X86 Xen CPUs
On 23/11/2023 00:07, Volodymyr Babchuk wrote:
>
> Hi,
>
> Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
>
>> Hi Stefano,
>>
>> Stefano Stabellini <sstabellini@kernel.org> writes:
>>
>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>> On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>>>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>>>> On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>>>>>>> On Wed, 22 Nov 2023, Paul Durrant wrote:
>>>>>>>> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>>>>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>>>>>
>>>>>>>>> Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
>>>>>>>>> inherit the owner of the directory.
>>>>>>>>
>>>>>>>> Ah... so that's why the previous patch is there.
>>>>>>>>
>>>>>>>> This is not the right way to fix it. The QEMU Xen support is *assuming* that
>>>>>>>> QEMU is either running in, or emulating, dom0. In the emulation case this is
>>>>>>>> probably fine, but the 'real Xen' case it should be using the correct domid
>>>>>>>> for node creation. I guess this could either be supplied on the command line
>>>>>>>> or discerned by reading the local domain 'domid' node.
>>>>>>>
>>>>>>> yes, it should be passed as command line option to QEMU
>>>>>>
>>>>>> I'm not sure I like the idea of a command line option for something
>>>>>> which QEMU could discover for itself.
>>>>>
>>>>> That's fine too. I meant to say "yes, as far as I know the toolstack
>>>>> passes the domid to QEMU as a command line option today".
>>>>
>>>> The -xen-domid argument on the QEMU command line today is the *guest*
>>>> domain ID, not the domain ID in which QEMU itself is running.
>>>>
>>>> Or were you thinking of something different?
>>>
>>> Ops, you are right and I understand your comment better now. The backend
>>> domid is not on the command line but it should be discoverable (on
>>> xenstore if I remember right).
>>
>> Yes, it is just "~/domid". I'll add a function that reads it.
>
> Just a quick question to QEMU folks: is it better to add a global
> variable where we will store own Domain ID or it will be okay to read
> domid from Xenstore every time we need it?
>
> If global variable variant is better, what is proffered place to define
> this variable? system/globals.c ?
>
Actually... is it possible for QEMU just to use a relative path for the
backend nodes? That way it won't need to know its own domid, will it?
Paul
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 9:28 ` Paul Durrant
@ 2023-11-23 10:45 ` David Woodhouse
2023-11-23 11:43 ` Volodymyr Babchuk
2023-11-23 11:54 ` Volodymyr Babchuk
1 sibling, 1 reply; 52+ messages in thread
From: David Woodhouse @ 2023-11-23 10:45 UTC (permalink / raw)
To: paul, Volodymyr Babchuk
Cc: Stefano Stabellini, qemu-devel@nongnu.org, Julien Grall,
Oleksandr Tyshchenko, Anthony Perard, open list:X86 Xen CPUs
On Thu, 2023-11-23 at 09:28 +0000, Paul Durrant wrote:
> On 23/11/2023 00:07, Volodymyr Babchuk wrote:
> >
> > Hi,
> >
> > Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
> >
> > > Hi Stefano,
> > >
> > > Stefano Stabellini <sstabellini@kernel.org> writes:
> > >
> > > > On Wed, 22 Nov 2023, David Woodhouse wrote:
> > > > > On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
> > > > > > On Wed, 22 Nov 2023, David Woodhouse wrote:
> > > > > > > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
> > > > > > > > On Wed, 22 Nov 2023, Paul Durrant wrote:
> > > > > > > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
> > > > > > > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > > > > > > > >
> > > > > > > > > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
> > > > > > > > > > inherit the owner of the directory.
> > > > > > > > >
> > > > > > > > > Ah... so that's why the previous patch is there.
> > > > > > > > >
> > > > > > > > > This is not the right way to fix it. The QEMU Xen support is *assuming* that
> > > > > > > > > QEMU is either running in, or emulating, dom0. In the emulation case this is
> > > > > > > > > probably fine, but the 'real Xen' case it should be using the correct domid
> > > > > > > > > for node creation. I guess this could either be supplied on the command line
> > > > > > > > > or discerned by reading the local domain 'domid' node.
> > > > > > > >
> > > > > > > > yes, it should be passed as command line option to QEMU
> > > > > > >
> > > > > > > I'm not sure I like the idea of a command line option for something
> > > > > > > which QEMU could discover for itself.
> > > > > >
> > > > > > That's fine too. I meant to say "yes, as far as I know the toolstack
> > > > > > passes the domid to QEMU as a command line option today".
> > > > >
> > > > > The -xen-domid argument on the QEMU command line today is the *guest*
> > > > > domain ID, not the domain ID in which QEMU itself is running.
> > > > >
> > > > > Or were you thinking of something different?
> > > >
> > > > Ops, you are right and I understand your comment better now. The backend
> > > > domid is not on the command line but it should be discoverable (on
> > > > xenstore if I remember right).
> > >
> > > Yes, it is just "~/domid". I'll add a function that reads it.
> >
> > Just a quick question to QEMU folks: is it better to add a global
> > variable where we will store own Domain ID or it will be okay to read
> > domid from Xenstore every time we need it?
> >
> > If global variable variant is better, what is proffered place to define
> > this variable? system/globals.c ?
> >
>
> Actually... is it possible for QEMU just to use a relative path for the
> backend nodes? That way it won't need to know it's own domid, will it?
That covers some of the use cases, but it may also need to know its own
domid for other purposes. Including writing the *absolute* path of the
backend to a frontend node?
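For illustration (this is the conventional XenStore layout, not
something introduced by this series): a frontend's "backend" node holds
an absolute path such as
/local/domain/<backend-domid>/backend/vif/<frontend-domid>/0, so a
backend that has to fill in that node still needs its own domid, even if
its own subtree can be written via relative paths.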
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 10:45 ` David Woodhouse
@ 2023-11-23 11:43 ` Volodymyr Babchuk
2023-11-23 11:51 ` David Woodhouse
0 siblings, 1 reply; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-23 11:43 UTC (permalink / raw)
To: David Woodhouse
Cc: paul@xen.org, Stefano Stabellini, qemu-devel@nongnu.org,
Julien Grall, Oleksandr Tyshchenko, Anthony Perard,
open list:X86 Xen CPUs
Hi David,
David Woodhouse <dwmw2@infradead.org> writes:
> On Thu, 2023-11-23 at 09:28 +0000, Paul Durrant wrote:
>> On 23/11/2023 00:07, Volodymyr Babchuk wrote:
>> >
>> > Hi,
>> >
>> > Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
>> >
>> > > Hi Stefano,
>> > >
>> > > Stefano Stabellini <sstabellini@kernel.org> writes:
>> > >
>> > > > On Wed, 22 Nov 2023, David Woodhouse wrote:
>> > > > > On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>> > > > > > On Wed, 22 Nov 2023, David Woodhouse wrote:
>> > > > > > > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>> > > > > > > > On Wed, 22 Nov 2023, Paul Durrant wrote:
>> > > > > > > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>> > > > > > > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> > > > > > > > > >
>> > > > > > > > > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
>> > > > > > > > > > inherit the owner of the directory.
>> > > > > > > > >
>> > > > > > > > > Ah... so that's why the previous patch is there.
>> > > > > > > > >
>> > > > > > > > > This is not the right way to fix it. The QEMU Xen support is *assuming* that
>> > > > > > > > > QEMU is either running in, or emulating, dom0. In the emulation case this is
>> > > > > > > > > probably fine, but the 'real Xen' case it should be using the correct domid
>> > > > > > > > > for node creation. I guess this could either be supplied on the command line
>> > > > > > > > > or discerned by reading the local domain 'domid' node.
>> > > > > > > >
>> > > > > > > > yes, it should be passed as command line option to QEMU
>> > > > > > >
>> > > > > > > I'm not sure I like the idea of a command line option for something
>> > > > > > > which QEMU could discover for itself.
>> > > > > >
>> > > > > > That's fine too. I meant to say "yes, as far as I know the toolstack
>> > > > > > passes the domid to QEMU as a command line option today".
>> > > > >
>> > > > > The -xen-domid argument on the QEMU command line today is the *guest*
>> > > > > domain ID, not the domain ID in which QEMU itself is running.
>> > > > >
>> > > > > Or were you thinking of something different?
>> > > >
>> > > > Ops, you are right and I understand your comment better now. The backend
>> > > > domid is not on the command line but it should be discoverable (on
>> > > > xenstore if I remember right).
>> > >
>> > > Yes, it is just "~/domid". I'll add a function that reads it.
>> >
>> > Just a quick question to QEMU folks: is it better to add a global
>> > variable where we will store own Domain ID or it will be okay to read
>> > domid from Xenstore every time we need it?
>> >
>> > If global variable variant is better, what is proffered place to define
>> > this variable? system/globals.c ?
>> >
>>
>> Actually... is it possible for QEMU just to use a relative path for the
>> backend nodes? That way it won't need to know it's own domid, will it?
>
> That covers some of the use cases, but it may also need to know its own
> domid for other purposes. Including writing the *absolute* path of the
> backend, to a frontend node?
Is this case possible? As I understand it, QEMU writes frontend nodes
only when it emulates Xen; otherwise this is done by the Xen toolstack.
And in the case of Xen emulation, QEMU always assumes the role of
Domain-0.
--
WBR, Volodymyr
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 11:43 ` Volodymyr Babchuk
@ 2023-11-23 11:51 ` David Woodhouse
0 siblings, 0 replies; 52+ messages in thread
From: David Woodhouse @ 2023-11-23 11:51 UTC (permalink / raw)
To: Volodymyr Babchuk
Cc: paul@xen.org, Stefano Stabellini, qemu-devel@nongnu.org,
Julien Grall, Oleksandr Tyshchenko, Anthony Perard,
open list:X86 Xen CPUs
On 23 November 2023 11:43:35 GMT, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>
>Hi David,
>
>David Woodhouse <dwmw2@infradead.org> writes:
>
>> On Thu, 2023-11-23 at 09:28 +0000, Paul Durrant wrote:
>>> On 23/11/2023 00:07, Volodymyr Babchuk wrote:
>>> >
>>> > Hi,
>>> >
>>> > Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
>>> >
>>> > > Hi Stefano,
>>> > >
>>> > > Stefano Stabellini <sstabellini@kernel.org> writes:
>>> > >
>>> > > > On Wed, 22 Nov 2023, David Woodhouse wrote:
>>> > > > > On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>>> > > > > > On Wed, 22 Nov 2023, David Woodhouse wrote:
>>> > > > > > > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>>> > > > > > > > On Wed, 22 Nov 2023, Paul Durrant wrote:
>>> > > > > > > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>> > > > > > > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>> > > > > > > > > >
>>> > > > > > > > > > Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
>>> > > > > > > > > > inherit the owner of the directory.
>>> > > > > > > > >
>>> > > > > > > > > Ah... so that's why the previous patch is there.
>>> > > > > > > > >
>>> > > > > > > > > This is not the right way to fix it. The QEMU Xen support is *assuming* that
>>> > > > > > > > > QEMU is either running in, or emulating, dom0. In the emulation case this is
>>> > > > > > > > > probably fine, but the 'real Xen' case it should be using the correct domid
>>> > > > > > > > > for node creation. I guess this could either be supplied on the command line
>>> > > > > > > > > or discerned by reading the local domain 'domid' node.
>>> > > > > > > >
>>> > > > > > > > yes, it should be passed as command line option to QEMU
>>> > > > > > >
>>> > > > > > > I'm not sure I like the idea of a command line option for something
>>> > > > > > > which QEMU could discover for itself.
>>> > > > > >
>>> > > > > > That's fine too. I meant to say "yes, as far as I know the toolstack
>>> > > > > > passes the domid to QEMU as a command line option today".
>>> > > > >
>>> > > > > The -xen-domid argument on the QEMU command line today is the *guest*
>>> > > > > domain ID, not the domain ID in which QEMU itself is running.
>>> > > > >
>>> > > > > Or were you thinking of something different?
>>> > > >
>>> > > > Ops, you are right and I understand your comment better now. The backend
>>> > > > domid is not on the command line but it should be discoverable (on
>>> > > > xenstore if I remember right).
>>> > >
>>> > > Yes, it is just "~/domid". I'll add a function that reads it.
>>> >
>>> > Just a quick question to QEMU folks: is it better to add a global
>>> > variable where we will store own Domain ID or it will be okay to read
>>> > domid from Xenstore every time we need it?
>>> >
>>> > If global variable variant is better, what is proffered place to define
>>> > this variable? system/globals.c ?
>>> >
>>>
>>> Actually... is it possible for QEMU just to use a relative path for the
>>> backend nodes? That way it won't need to know it's own domid, will it?
>>
>> That covers some of the use cases, but it may also need to know its own
>> domid for other purposes. Including writing the *absolute* path of the
>> backend, to a frontend node?
>
>Is this case possible? As I understand, QEMU writes frontend nodes only
>when it emulates Xen, otherwise this done by Xen toolstack. And in case
>of Xen emulation, QEMU always assumes role of Domain-0.
No, you can hotplug and unplug devices in QEMU even under real Xen. And if QEMU in a driver domain is given sufficient permission to write to its guest's frontend nodes, it should not need to be in dom0.
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 9:28 ` Paul Durrant
2023-11-23 10:45 ` David Woodhouse
@ 2023-11-23 11:54 ` Volodymyr Babchuk
2023-11-23 11:57 ` David Woodhouse
1 sibling, 1 reply; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-23 11:54 UTC (permalink / raw)
To: paul@xen.org
Cc: David Woodhouse, Stefano Stabellini, qemu-devel@nongnu.org,
Julien Grall, Oleksandr Tyshchenko, Anthony Perard,
open list:X86 Xen CPUs
Hi Paul,
Paul Durrant <xadimgnik@gmail.com> writes:
> On 23/11/2023 00:07, Volodymyr Babchuk wrote:
>> Hi,
>> Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
>>
>>> Hi Stefano,
>>>
>>> Stefano Stabellini <sstabellini@kernel.org> writes:
>>>
>>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>>> On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>>>>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>>>>> On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>>>>>>>> On Wed, 22 Nov 2023, Paul Durrant wrote:
>>>>>>>>> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>>>>>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>>>>>>
>>>>>>>>>> Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
>>>>>>>>>> inherit the owner of the directory.
>>>>>>>>>
>>>>>>>>> Ah... so that's why the previous patch is there.
>>>>>>>>>
>>>>>>>>> This is not the right way to fix it. The QEMU Xen support is *assuming* that
>>>>>>>>> QEMU is either running in, or emulating, dom0. In the emulation case this is
>>>>>>>>> probably fine, but the 'real Xen' case it should be using the correct domid
>>>>>>>>> for node creation. I guess this could either be supplied on the command line
>>>>>>>>> or discerned by reading the local domain 'domid' node.
>>>>>>>>
>>>>>>>> yes, it should be passed as command line option to QEMU
>>>>>>>
>>>>>>> I'm not sure I like the idea of a command line option for something
>>>>>>> which QEMU could discover for itself.
>>>>>>
>>>>>> That's fine too. I meant to say "yes, as far as I know the toolstack
>>>>>> passes the domid to QEMU as a command line option today".
>>>>>
>>>>> The -xen-domid argument on the QEMU command line today is the *guest*
>>>>> domain ID, not the domain ID in which QEMU itself is running.
>>>>>
>>>>> Or were you thinking of something different?
>>>>
>>>> Ops, you are right and I understand your comment better now. The backend
>>>> domid is not on the command line but it should be discoverable (on
>>>> xenstore if I remember right).
>>>
>>> Yes, it is just "~/domid". I'll add a function that reads it.
>> Just a quick question to QEMU folks: is it better to add a global
>> variable where we will store own Domain ID or it will be okay to read
>> domid from Xenstore every time we need it?
>> If global variable variant is better, what is proffered place to
>> define
>> this variable? system/globals.c ?
>>
>
> Actually... is it possible for QEMU just to use a relative path for
> the backend nodes? That way it won't need to know it's own domid, will
> it?
Well, it is possible to use a relative path; AFAIK, Linux-based
backends do exactly this. But the problem is with the xenstore_mkdir()
function, which needs the domain ID to set the node owner correctly
when it goes on to set the permissions.
As David said, architecturally it would be better to get rid of
xenstore_mkdir() usage altogether, because it is used only by legacy
backends. But to do that, someone needs to convert the legacy backends
to the newer API. I have no capacity to do this right now, so I
implemented a contained solution:
static int xenstore_read_own_domid(unsigned int *domid)
in xen_pvdev.c. I believe this new function will be removed along with
the whole of xen_pvdev.c once there are no legacy backends left.
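A minimal sketch of that helper, following the signature above and
assuming the file's existing qemu_xen_xs_read() wrapper and global
xenstore handle (the body is an illustration, not the code from the
next revision):

static int xenstore_read_own_domid(unsigned int *domid)
{
    unsigned int len;
    char *value = qemu_xen_xs_read(xenstore, XBT_NULL, "domid", &len);

    if (value == NULL) {
        return -1;                  /* "~/domid" not readable */
    }
    *domid = strtoul(value, NULL, 10);
    free(value);                    /* xenstore values come from plain malloc() */
    return 0;
}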
--
WBR, Volodymyr
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 11:54 ` Volodymyr Babchuk
@ 2023-11-23 11:57 ` David Woodhouse
2023-11-23 12:17 ` Volodymyr Babchuk
0 siblings, 1 reply; 52+ messages in thread
From: David Woodhouse @ 2023-11-23 11:57 UTC (permalink / raw)
To: Volodymyr Babchuk, paul@xen.org
Cc: Stefano Stabellini, qemu-devel@nongnu.org, Julien Grall,
Oleksandr Tyshchenko, Anthony Perard, open list:X86 Xen CPUs
On 23 November 2023 11:54:01 GMT, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>
>Hi Paul,
>
>Paul Durrant <xadimgnik@gmail.com> writes:
>
>> On 23/11/2023 00:07, Volodymyr Babchuk wrote:
>>> Hi,
>>> Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
>>>
>>>> Hi Stefano,
>>>>
>>>> Stefano Stabellini <sstabellini@kernel.org> writes:
>>>>
>>>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>>>> On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>>>>>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>>>>>> On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>>>>>>>>> On Wed, 22 Nov 2023, Paul Durrant wrote:
>>>>>>>>>> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>>>>>>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>>>>>>>
>>>>>>>>>>> Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
>>>>>>>>>>> inherit the owner of the directory.
>>>>>>>>>>
>>>>>>>>>> Ah... so that's why the previous patch is there.
>>>>>>>>>>
>>>>>>>>>> This is not the right way to fix it. The QEMU Xen support is *assuming* that
>>>>>>>>>> QEMU is either running in, or emulating, dom0. In the emulation case this is
>>>>>>>>>> probably fine, but the 'real Xen' case it should be using the correct domid
>>>>>>>>>> for node creation. I guess this could either be supplied on the command line
>>>>>>>>>> or discerned by reading the local domain 'domid' node.
>>>>>>>>>
>>>>>>>>> yes, it should be passed as command line option to QEMU
>>>>>>>>
>>>>>>>> I'm not sure I like the idea of a command line option for something
>>>>>>>> which QEMU could discover for itself.
>>>>>>>
>>>>>>> That's fine too. I meant to say "yes, as far as I know the toolstack
>>>>>>> passes the domid to QEMU as a command line option today".
>>>>>>
>>>>>> The -xen-domid argument on the QEMU command line today is the *guest*
>>>>>> domain ID, not the domain ID in which QEMU itself is running.
>>>>>>
>>>>>> Or were you thinking of something different?
>>>>>
>>>>> Ops, you are right and I understand your comment better now. The backend
>>>>> domid is not on the command line but it should be discoverable (on
>>>>> xenstore if I remember right).
>>>>
>>>> Yes, it is just "~/domid". I'll add a function that reads it.
>>> Just a quick question to QEMU folks: is it better to add a global
>>> variable where we will store own Domain ID or it will be okay to read
>>> domid from Xenstore every time we need it?
>>> If global variable variant is better, what is proffered place to
>>> define
>>> this variable? system/globals.c ?
>>>
>>
>> Actually... is it possible for QEMU just to use a relative path for
>> the backend nodes? That way it won't need to know it's own domid, will
>> it?
>
>Well, it is possible to use relative path, AFAIK, linux-based backends
>are doing exactly this. But problem is with xenstore_mkdir() function,
>which requires domain id to correctly set owner when it causes call to
>set_permissions().
>
>As David said, architecturally it will be better to get rid of
>xenstore_mkdir() usage, because it is used by legacy backends only. But
>to do this, someone needs to convert legacy backends to use newer API. I
>have no capacity to do this right now, so I implemented a contained
>solution:
>
>static int xenstore_read_own_domid(unsigned int *domid)
>
>in xen_pvdev.c. I believe, this new function will be removed along with
>whole xen_pvdev.c when there will be no legacy backends left.
Which PV backends do you care about? We already have net, block and console converted.
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 11:57 ` David Woodhouse
@ 2023-11-23 12:17 ` Volodymyr Babchuk
2023-11-23 12:27 ` David Woodhouse
0 siblings, 1 reply; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-23 12:17 UTC (permalink / raw)
To: David Woodhouse
Cc: paul@xen.org, Stefano Stabellini, qemu-devel@nongnu.org,
Julien Grall, Oleksandr Tyshchenko, Anthony Perard,
xen-devel@lists.xenproject.org
Hi David,
David Woodhouse <dwmw2@infradead.org> writes:
> On 23 November 2023 11:54:01 GMT, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>>
>>Hi Paul,
>>
>>Paul Durrant <xadimgnik@gmail.com> writes:
>>
>>> On 23/11/2023 00:07, Volodymyr Babchuk wrote:
>>>> Hi,
>>>> Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
>>>>
>>>>> Hi Stefano,
>>>>>
>>>>> Stefano Stabellini <sstabellini@kernel.org> writes:
>>>>>
>>>>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>>>>> On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>>>>>>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>>>>>>> On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>>>>>>>>>> On Wed, 22 Nov 2023, Paul Durrant wrote:
>>>>>>>>>>> On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>>>>>>>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>>>>>>>>
>>>>>>>>>>>> Instead of forcing the owner to domid 0, use XS_PRESERVE_OWNER to
>>>>>>>>>>>> inherit the owner of the directory.
>>>>>>>>>>>
>>>>>>>>>>> Ah... so that's why the previous patch is there.
>>>>>>>>>>>
>>>>>>>>>>> This is not the right way to fix it. The QEMU Xen support is *assuming* that
>>>>>>>>>>> QEMU is either running in, or emulating, dom0. In the emulation case this is
>>>>>>>>>>> probably fine, but the 'real Xen' case it should be using the correct domid
>>>>>>>>>>> for node creation. I guess this could either be supplied on the command line
>>>>>>>>>>> or discerned by reading the local domain 'domid' node.
>>>>>>>>>>
>>>>>>>>>> yes, it should be passed as command line option to QEMU
>>>>>>>>>
>>>>>>>>> I'm not sure I like the idea of a command line option for something
>>>>>>>>> which QEMU could discover for itself.
>>>>>>>>
>>>>>>>> That's fine too. I meant to say "yes, as far as I know the toolstack
>>>>>>>> passes the domid to QEMU as a command line option today".
>>>>>>>
>>>>>>> The -xen-domid argument on the QEMU command line today is the *guest*
>>>>>>> domain ID, not the domain ID in which QEMU itself is running.
>>>>>>>
>>>>>>> Or were you thinking of something different?
>>>>>>
>>>>>> Ops, you are right and I understand your comment better now. The backend
>>>>>> domid is not on the command line but it should be discoverable (on
>>>>>> xenstore if I remember right).
>>>>>
>>>>> Yes, it is just "~/domid". I'll add a function that reads it.
>>>> Just a quick question to QEMU folks: is it better to add a global
>>>> variable where we will store own Domain ID or it will be okay to read
>>>> domid from Xenstore every time we need it?
>>>> If global variable variant is better, what is proffered place to
>>>> define
>>>> this variable? system/globals.c ?
>>>>
>>>
>>> Actually... is it possible for QEMU just to use a relative path for
>>> the backend nodes? That way it won't need to know it's own domid, will
>>> it?
>>
>>Well, it is possible to use relative path, AFAIK, linux-based backends
>>are doing exactly this. But problem is with xenstore_mkdir() function,
>>which requires domain id to correctly set owner when it causes call to
>>set_permissions().
>>
>>As David said, architecturally it will be better to get rid of
>>xenstore_mkdir() usage, because it is used by legacy backends only. But
>>to do this, someone needs to convert legacy backends to use newer API. I
>>have no capacity to do this right now, so I implemented a contained
>>solution:
>>
>>static int xenstore_read_own_domid(unsigned int *domid)
>>
>>in xen_pvdev.c. I believe, this new function will be removed along with
>>whole xen_pvdev.c when there will be no legacy backends left.
>
> Which PV backends do you care about? We already have net, block and console converted.
Well, that is all we need, actually. Even console alone would be
sufficient, as we are using QEMU to provide virtio-pci backends, so both
storage and networking should be provided by virtio. Are you proposing
to just drop this patch altogether? I believe we can live without it, yes.
--
WBR, Volodymyr
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 12:17 ` Volodymyr Babchuk
@ 2023-11-23 12:27 ` David Woodhouse
2023-11-23 12:54 ` Paul Durrant
2023-11-24 0:24 ` Volodymyr Babchuk
0 siblings, 2 replies; 52+ messages in thread
From: David Woodhouse @ 2023-11-23 12:27 UTC (permalink / raw)
To: Volodymyr Babchuk
Cc: paul@xen.org, Stefano Stabellini, qemu-devel@nongnu.org,
Julien Grall, Oleksandr Tyshchenko, Anthony Perard,
xen-devel@lists.xenproject.org
On 23 November 2023 12:17:57 GMT, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>
>Hi David,
>
>David Woodhouse <dwmw2@infradead.org> writes:
>> Which PV backends do you care about? We already have net, block and console converted.
>
>Well, this is all what we need, actually. Even console only will be
>sufficient, as we are using QEMU to provide virtio-pci backends, so both
>storage and networking should be provided by virtio. Are you proposing
>to just drop this patch at all? I believe we can live without it, yes.
Yeah, I think you can drop anything that's only needed for the legacy backends. I'm tempted to make a separate config option to compile those out.
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 12:27 ` David Woodhouse
@ 2023-11-23 12:54 ` Paul Durrant
2023-11-24 0:24 ` Volodymyr Babchuk
1 sibling, 0 replies; 52+ messages in thread
From: Paul Durrant @ 2023-11-23 12:54 UTC (permalink / raw)
To: David Woodhouse, Volodymyr Babchuk
Cc: Stefano Stabellini, qemu-devel@nongnu.org, Julien Grall,
Oleksandr Tyshchenko, Anthony Perard,
xen-devel@lists.xenproject.org
On 23/11/2023 12:27, David Woodhouse wrote:
> On 23 November 2023 12:17:57 GMT, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>>
>> Hi David,
>>
>> David Woodhouse <dwmw2@infradead.org> writes:
>>> Which PV backends do you care about? We already have net, block and console converted.
>>
>> Well, this is all what we need, actually. Even console only will be
>> sufficient, as we are using QEMU to provide virtio-pci backends, so both
>> storage and networking should be provided by virtio. Are you proposing
>> to just drop this patch at all? I believe we can live without it, yes.
>
> Yeah, I think you can drop anything that's only needed for the legacy backends. I'm tempted to make a separate config option to compile those out.
>
I think that would be a good idea. The other legacy backend that we may
need to care about is xenfb... not so much the framebuffer itself, but
the mouse and keyboard aspects. The XENVKBD and XENHID drivers expose a
PV mouse and keyboard to Windows instances, so it would be nice if we
can avoid the backend withering away.
Paul
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 12:27 ` David Woodhouse
2023-11-23 12:54 ` Paul Durrant
@ 2023-11-24 0:24 ` Volodymyr Babchuk
1 sibling, 0 replies; 52+ messages in thread
From: Volodymyr Babchuk @ 2023-11-24 0:24 UTC (permalink / raw)
To: David Woodhouse
Cc: paul@xen.org, Stefano Stabellini, qemu-devel@nongnu.org,
Julien Grall, Oleksandr Tyshchenko, Anthony Perard,
xen-devel@lists.xenproject.org
Hi David,
David Woodhouse <dwmw2@infradead.org> writes:
> On 23 November 2023 12:17:57 GMT, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>>
>>Hi David,
>>
>>David Woodhouse <dwmw2@infradead.org> writes:
>>> Which PV backends do you care about? We already have net, block and console converted.
>>
>>Well, this is all what we need, actually. Even console only will be
>>sufficient, as we are using QEMU to provide virtio-pci backends, so both
>>storage and networking should be provided by virtio. Are you proposing
>>to just drop this patch at all? I believe we can live without it, yes.
>
> Yeah, I think you can drop anything that's only needed for the legacy backends. I'm tempted to make a separate config option to compile those out.
Yep, we need this, because without the patch ("xen_pvdev: Do not assume
Dom0 when creating a directory") I can't run QEMU in the driver domain:
root@spider-domd:~# qemu-system-aarch64 -machine xenpv -m 128M
xen be core: xs_mkdir device-model/0/backends/vkbd: failed
xen be core: xs_mkdir device-model/0/backends/vkbd: failed
xen be core: xs_mkdir device-model/0/backends/9pfs: failed
xen be core: xs_mkdir device-model/0/backends/9pfs: failed
So yeah, we need something like a CONFIG_XEN_LEGACY_BACKENDS option if
we don't want to fix xenstore_mkdir().
--
WBR, Volodymyr
* Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory
2023-11-23 0:07 ` Volodymyr Babchuk
2023-11-23 9:28 ` Paul Durrant
@ 2023-11-24 12:56 ` Alex Bennée
1 sibling, 0 replies; 52+ messages in thread
From: Alex Bennée @ 2023-11-24 12:56 UTC (permalink / raw)
To: Volodymyr Babchuk
Cc: David Woodhouse, Stefano Stabellini, paul@xen.org,
qemu-devel@nongnu.org, Julien Grall, Oleksandr Tyshchenko,
Anthony Perard, open list:X86 Xen CPUs
Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> writes:
> Hi,
>
> Volodymyr Babchuk <volodymyr_babchuk@epam.com> writes:
>
>> Hi Stefano,
>>
>> Stefano Stabellini <sstabellini@kernel.org> writes:
>>
>>> On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>> On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>>>> > On Wed, 22 Nov 2023, David Woodhouse wrote:
>>>> > > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>>>> > > > On Wed, 22 Nov 2023, Paul Durrant wrote:
>>>> > > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>>> > > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
<snip>
>>>> The -xen-domid argument on the QEMU command line today is the *guest*
>>>> domain ID, not the domain ID in which QEMU itself is running.
>>>>
>>>> Or were you thinking of something different?
>>>
>>> Ops, you are right and I understand your comment better now. The backend
>>> domid is not on the command line but it should be discoverable (on
>>> xenstore if I remember right).
>>
>> Yes, it is just "~/domid". I'll add a function that reads it.
>
> Just a quick question to QEMU folks: is it better to add a global
> variable where we will store own Domain ID or it will be okay to read
> domid from Xenstore every time we need it?
>
> If global variable variant is better, what is proffered place to define
> this variable? system/globals.c ?
Hmm, maybe. I see Xen already has some there, but the comment:
"Global variables that (mostly) should not exist"
is there, I think, to dissuade the file from growing more than it should.
I think generally the best pattern to use if a global can't be avoided
is to have a "static global" in the main .c file for the module and then
provide a helper access function for other files to read it. That also
makes re-factoring easier if things like locking need to be added down
the line.
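A sketch of that pattern applied to the domid question (all names below
are made up for illustration):

/* hw/xen/xen_pvdev.c */
static unsigned int xen_own_domid;
static bool xen_own_domid_valid;

unsigned int xen_pvdev_get_own_domid(void)
{
    if (!xen_own_domid_valid &&
        xenstore_read_own_domid(&xen_own_domid) == 0) {
        xen_own_domid_valid = true;
    }
    return xen_own_domid;
}

/* Other files only ever see the accessor, declared in a header as:
 *     unsigned int xen_pvdev_get_own_domid(void);
 */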
We still do have a few true global variables which need "extern"
declarations in the headers but if we can avoid adding more that would
be good.
Of course ideally this sort of data would be wrapped up in QOM
structures but I can see the argument for the host domain ID.
--
Alex Bennée
Virtualisation Tech Lead @ Linaro