From: David Hildenbrand <david@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>,
	yuanminghao <yuanmh12@chinatelecom.cn>,
	qemu-devel@nongnu.org, Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
Date: Tue, 3 Jun 2025 11:15:18 +0200	[thread overview]
Message-ID: <89639b23-6061-4c18-96c9-1f2d3bcd773c@redhat.com> (raw)
In-Reply-To: <20250530073603-mutt-send-email-mst@kernel.org>

On 30.05.25 13:36, Michael S. Tsirkin wrote:
> On Fri, May 30, 2025 at 01:28:58PM +0200, David Hildenbrand wrote:
>> On 30.05.25 13:18, Michael S. Tsirkin wrote:
>>> On Wed, May 14, 2025 at 11:26:05AM +0200, David Hildenbrand wrote:
>>>> On 14.05.25 11:12, Igor Mammedov wrote:
>>>>> On Tue, 13 May 2025 15:12:11 +0200
>>>>> David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>>> On 13.05.25 14:13, Igor Mammedov wrote:
>>>>>>> On Mon,  3 Mar 2025 13:02:17 -0500
>>>>>>> yuanminghao <yuanmh12@chinatelecom.cn> wrote:
>>>>>>>>>> Global used_memslots or used_shared_memslots is updated to 0 unexpectedly
>>>>>>>>>
>>>>>>>>> it shouldn't be 0 in practice, as it comes from the number of RAM regions the VM has.
>>>>>>>>> It's likely a bug somewhere else.
>>>>>>>
>>>>>>> I haven't touched this code for a long time, but I'd say that if we consider
>>>>>>> multiple devices, we shouldn't do the following:
>>>>>>>
>>>>>>> static void vhost_commit(MemoryListener *listener)
>>>>>>>         ...
>>>>>>>         if (dev->vhost_ops->vhost_backend_no_private_memslots &&
>>>>>>>             dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
>>>>>>>             used_shared_memslots = dev->mem->nregions;
>>>>>>>         } else {
>>>>>>>             used_memslots = dev->mem->nregions;
>>>>>>>         }
>>>>>>>
>>>>>>> where the value that dev->mem->nregions ends up with is well hidden/obscured,
>>>>>>> and its origin is hard to trace => fragile.
>>>>>>>
>>>>>>> CCing David (accidental victim) who rewrote this part the last time,
>>>>>>> perhaps he can suggest a better way to fix the issue.
>>>>>>
>>>>>> I think the original idea is that all devices (of one type: private vs.
>>>>>> non-private memslots) have the same number of memslots.
>>>>>>
>>>>>> This avoids having to loop over all devices to figure out the number of
>>>>>> memslots.
>>>>>>
>>>>>> ... but in vhost_get_free_memslots() we already loop over all devices.
>>>>>>
>>>>>> The check in vhost_dev_init() needs to be taken care of.
>>>>>>
>>>>>> So maybe we can get rid of both variables completely?
>>>>>
>>>>> looks reasonable to me (instead of the current state, which is
>>>>> juggling dev->mem->nregions values that can become 0 on unplug,
>>>>> as was reported).
>>>>>
>>>>> David,
>>>>> do you have time to fix it?
>>>>
>>>> I can try, but I was wondering/hoping whether Yuanminghao could take a look
>>>> at that? I can provide guidance if necessary.
>>>
>>>
>>> Guys?
>>
>> Is the original author not interested in fixing the problem?
> 
> Given the silence I'd guess no.

SMH, why then even send patches in the first place ...

Will try finding time to look into this ...

-- 
Cheers,

David / dhildenb




Thread overview: 14+ messages
2025-03-03 18:02 [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev yuanminghao
2025-04-02 16:25 ` Michael S. Tsirkin
2025-05-09 16:39 ` Michael S. Tsirkin
2025-05-13 12:13 ` Igor Mammedov
2025-05-13 13:12   ` David Hildenbrand
2025-05-14  9:12     ` Igor Mammedov
2025-05-14  9:26       ` David Hildenbrand
2025-05-30 11:18         ` Michael S. Tsirkin
2025-05-30 11:28           ` David Hildenbrand
2025-05-30 11:36             ` Michael S. Tsirkin
2025-06-03  9:15               ` David Hildenbrand [this message]
2025-06-10 16:14                 ` Michael S. Tsirkin
  -- strict thread matches above, loose matches on Subject: below --
2024-11-21  6:07 yuanminghao
2025-02-25 13:42 ` Igor Mammedov
