From: Lin Ma <lma@suse.de>
To: "Marc-André Lureau" <marcandre.lureau@gmail.com>
Cc: Michael Roth <mdroth@linux.vnet.ibm.com>,
QEMU <qemu-devel@nongnu.org>, Lin Ma <lma@suse.com>
Subject: Re: [PATCH] qga: Correct loop count in qmp_guest_get_vcpus()
Date: Fri, 20 Nov 2020 09:28:36 +0000 [thread overview]
Message-ID: <7594e6cebc51b395c98dbc8714beb7ff@suse.de> (raw)
In-Reply-To: <CAJ+F1C+wUoeT-xTA1Rv6XWBBQfEB_mzOXHBBbEt7OcEQ9+84bQ@mail.gmail.com>

On 2020-11-19 14:46, Marc-André Lureau wrote:
> Hi
>
> On Thu, Nov 19, 2020 at 12:48 PM Lin Ma <lma@suse.com> wrote:
>
>> The guest-get-vcpus command returns incorrect vcpu info in case we
>> hot-unplug vcpus (other than the last one).
>> e.g.:
>> A VM has 4 vcpus: cpu0 plus 3 hot-unpluggable online vcpus (cpu1,
>> cpu2 and cpu3).
>> Hot-unplug cpu2; now only cpu0, cpu1 and cpu3 are present & online.
>>
>> ./qmp-shell /tmp/qmp-monitor.sock
>> (QEMU) query-hotpluggable-cpus
>> {"return": [
>> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 3},
>> "vcpus-count": 1,
>> "qom-path": "/machine/peripheral/cpu3", "type": "host-x86_64-cpu"},
>> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 2},
>> "vcpus-count": 1,
>> "qom-path": "/machine/peripheral/cpu2", "type": "host-x86_64-cpu"},
>> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 1},
>> "vcpus-count": 1,
>> "qom-path": "/machine/peripheral/cpu1", "type": "host-x86_64-cpu"},
>> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 0},
>> "vcpus-count": 1,
>> "qom-path": "/machine/unattached/device[0]", "type": "host-x86_64-cpu"}
>> ]}
>>
>> (QEMU) device_del id=cpu2
>> {"return": {}}
>>
>> (QEMU) query-hotpluggable-cpus
>> {"return": [
>> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 3},
>> "vcpus-count": 1,
>> "qom-path": "/machine/peripheral/cpu3", "type": "host-x86_64-cpu"},
>> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 2},
>> "vcpus-count": 1,
>> "type": "host-x86_64-cpu"},
>> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 1},
>> "vcpus-count": 1,
>> "qom-path": "/machine/peripheral/cpu1", "type": "host-x86_64-cpu"},
>> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 0},
>> "vcpus-count": 1,
>> "qom-path": "/machine/unattached/device[0]", "type": "host-x86_64-cpu"}
>> ]}
>>
>> Before:
>> ./qmp-shell -N /tmp/qmp-ga.sock
>> Welcome to the QMP low-level shell!
>> Connected
>> (QEMU) guest-get-vcpus
>> {"return": [
>> {"online": true, "can-offline": false, "logical-id": 0},
>> {"online": true, "can-offline": true, "logical-id": 1}]}
>>
>> After:
>> ./qmp-shell -N /tmp/qmp-ga.sock
>> Welcome to the QMP low-level shell!
>> Connected
>> (QEMU) guest-get-vcpus
>> {"execute":"guest-get-vcpus"}
>> {"return": [
>> {"online": true, "can-offline": false, "logical-id": 0},
>> {"online": true, "can-offline": true, "logical-id": 1},
>> {"online": true, "can-offline": true, "logical-id": 3}]}
>>
>> Signed-off-by: Lin Ma <lma@suse.com>
>> ---
>> qga/commands-posix.c | 8 +++++---
>> 1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/qga/commands-posix.c b/qga/commands-posix.c
>> index 3bffee99d4..accc893373 100644
>> --- a/qga/commands-posix.c
>> +++ b/qga/commands-posix.c
>> @@ -2182,15 +2182,15 @@ GuestLogicalProcessorList *qmp_guest_get_vcpus(Error **errp)
>> {
>> int64_t current;
>> GuestLogicalProcessorList *head, **link;
>> - long sc_max;
>> + long max_loop_count;
>> Error *local_err = NULL;
>>
>> current = 0;
>> head = NULL;
>> link = &head;
>> - sc_max = SYSCONF_EXACT(_SC_NPROCESSORS_CONF, &local_err);
>> + max_loop_count = SYSCONF_EXACT(_SC_NPROCESSORS_CONF, &local_err);
>>
>> - while (local_err == NULL && current < sc_max) {
>> + while (local_err == NULL && current < max_loop_count) {
>> GuestLogicalProcessor *vcpu;
>> GuestLogicalProcessorList *entry;
>> int64_t id = current++;
>> @@ -2206,6 +2206,8 @@ GuestLogicalProcessorList *qmp_guest_get_vcpus(Error **errp)
>> entry->value = vcpu;
>> *link = entry;
>> link = &entry->next;
>> + } else {
>> + max_loop_count += 1;
>>
>
> This looks like a recipe for infinite loop on error.
Emm... yes, it is possible.
>
> Shouldn't we loop over all the /sys/devices/system/cpu/cpu#/ instead?
Originally I wanted to use fnmatch() to match the cpu# pattern and loop
over all of the /sys/devices/system/cpu/cpu#/ directories, but that
pulls in the fnmatch.h header and complicates things a little.
>
> (possibly parse /sys/devices/system/cpu/present, but I doubt it's
> necessary)
IMO the 'present' won't help.
I'm about to post the v2; I made a tiny change in it, please help to
review.
BTW, local_err will be set in case of error, right? That should avoid
the infinite loop.
Thanks a lot,
Lin
Thread overview: 6+ messages
2020-11-19 8:45 [PATCH] qga: Correct loop count in qmp_guest_get_vcpus() Lin Ma
2020-11-19 14:46 ` Marc-André Lureau
2020-11-20 9:28 ` Lin Ma [this message]
2020-11-20 9:48 ` Marc-André Lureau
-- strict thread matches above, loose matches on Subject: below --
2020-11-19 6:08 Lin Ma
2020-11-19 6:12 ` no-reply