From: David Hildenbrand <david@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Auger Eric <eric.auger@redhat.com>,
	Pankaj Gupta <pankaj.gupta@cloud.ionos.com>,
	teawater <teawaterz@linux.alibaba.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Marek Kedzierski <mkedzier@redhat.com>
Subject: Re: [PATCH v6 01/12] memory: Introduce RamDiscardMgr for RAM memory regions
Date: Tue, 23 Feb 2021 11:50:11 +0100
Message-ID: <1409acfe-86eb-a4db-b35a-b45f5c046a2e@redhat.com>
In-Reply-To: <82e6faad-7d45-0f37-eda5-aef42e353972@redhat.com>

On 22.02.21 20:43, David Hildenbrand wrote:
>>>>> The main motivation is to let the listener decide how it wants to
>>>>> handle the memory region. For example, for vhost, vdpa, kvm, ... I
>>>>> only want a single region, not separate ones for each and every
>>>>> populated range, punching out discarded ranges. Note that there are
>>>>> cases (e.g., anonymous memory) where it's even valid for the guest
>>>>> to read discarded memory.
>>>>
>>>> Yes, I agree with that.  You would still have the same
>>>> region_add/region_nop/region_del callbacks for KVM and friends; on top
>>>> of that you would have region_populate/region_discard callbacks for VFIO.
>>>
>>> I think instead of region_populate/region_discard we would want
>>> individual region_add/region_del calls when populating/discarding, for
>>> all MemoryListeners that opt in somehow (e.g., VFIO, dump-guest-memory,
>>> ...). Similarly, we would then want to call log_sync()/log_clear() only
>>> for those parts.
>>>
>>> But what happens when I populate/discard some memory? I don't want to
>>> trigger an address space transaction (begin()...region_nop()...commit())
>>> whenever I populate/discard memory (e.g., at 2 MB granularity),
>>> especially not if nothing might have changed for most other
>>> MemoryListeners.
>>
>> Right, that was the reason why I was suggesting different callbacks.
>> For the VFIO listener, which doesn't have begin or commit callbacks, I
>> think you could just rename region_add to region_populate, and point
>> both region_del and region_discard to the existing region_del callback.
>>
>> Calling log_sync/log_clear only for populated parts also makes sense.
>> log_sync and log_clear do not have to be within begin/commit, so you can
>> change the semantics to call them more than once.
> 
> So I looked at the simplest of all cases (discard) and I am not yet
> convinced that this is the right approach. I can understand why it looks
> like this fits into the MemoryListener, but I am not sure it gives us any
> real benefits or makes the code any clearer (I'd even say the contrary).
> 
> 
> +void memory_region_notify_discard(MemoryRegion *mr, hwaddr offset,
> +                                  hwaddr size)
> +{
> +    hwaddr mr_start, mr_end;
> +    MemoryRegionSection mrs;
> +    MemoryListener *listener;
> +    AddressSpace *as;
> +    FlatView *view;
> +    FlatRange *fr;
> +
> +    QTAILQ_FOREACH(listener, &memory_listeners, link) {
> +        if (!listener->region_discard) {
> +            continue;
> +        }
> +        as = listener->address_space;
> +        view = address_space_get_flatview(as);
> +        FOR_EACH_FLAT_RANGE(fr, view) {
> +            if (fr->mr != mr) {
> +                continue;
> +            }
> +
> +            mrs = section_from_flat_range(fr, view);
> +
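> +            /* Clip the discarded range against this section. */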
> +            mr_start = MAX(mrs.offset_within_region, offset);
> +            mr_end = MIN(offset + size,
> +                         mrs.offset_within_region + int128_get64(mrs.size));
> +
> +            if (mr_start >= mr_end) {
> +                continue;
> +            }
> +
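> +            /* Notify with the section adjusted to the clipped range. */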
> +            mrs.offset_within_address_space += mr_start -
> +                                               mrs.offset_within_region;
> +            mrs.offset_within_region = mr_start;
> +            mrs.size = int128_make64(mr_end - mr_start);
> +            listener->region_discard(listener, &mrs);
> +        }
> +        flatview_unref(view);
> +    }
> +}
> 
> Maybe I am missing something important. This looks highly inefficient.
> 
> 1. Although we know the memory region, we have to walk over the whole
>      address space ... over and over again, once for each potential
>      listener.
> 
> 2. Even without any applicable listeners (e.g., no VFIO), we loop over all
>      listeners. There are ways around that, but they wouldn't make the
>      code any nicer IMHO.
> 
> 3. In the future I am planning on sending populate/discard events
>      without holding the BQL (in my approach, synchronizing internally
>      against register/unregister/populate/discard ...). I don't see an
>      easy way to achieve that here; I think we would be required to hold
>      the BQL for any updates.
> 
> memory_region_notify_populate() gets quite ugly once we realize halfway
> through that we have to revert what we already did, i.e., notify a
> discard for the pieces we already populated ...
> 

So, the more I look into it, the more I doubt this should go into the
MemoryListener.

The RamDiscardManager is specific to RAM memory regions, similar to how
the IOMMU notifier is specific to IOMMU memory regions.
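
Roughly, what I have in mind is a dedicated listener that such clients
register directly against the RamDiscardMgr of a RAM memory region. Just
a sketch of the direction (names and details are illustrative, not
final), assuming QEMU's existing MemoryRegionSection and QLIST
infrastructure:

typedef struct RamDiscardListener RamDiscardListener;

typedef void (*NotifyRamPopulate)(RamDiscardListener *rdl,
                                  MemoryRegionSection *section);
typedef void (*NotifyRamDiscard)(RamDiscardListener *rdl,
                                 MemoryRegionSection *section);

struct RamDiscardListener {
    /* Called when memory gets populated (usable by the VM). */
    NotifyRamPopulate notify_populate;
    /* Called when memory gets discarded again. */
    NotifyRamDiscard notify_discard;
    /* The part of the RAM memory region this listener cares about. */
    MemoryRegionSection section;
    QLIST_ENTRY(RamDiscardListener) next;
};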

In the near future we will have two "clients" (vfio and
tpm/dump-guest-memory), of which only vfio actually has to register for
updates at runtime.

I really want to have a dedicated registration/notifier mechanism, for
reasons already mentioned in my last mail, but also to later reuse that
mechanism in other contexts, as noted in the cover letter:

"Note: At some point, we might want to let RAMBlock users (esp. vfio 
used for nvme://) consume this interface as well. We'll need RAMBlock 
notifier calls when a RAMBlock is getting mapped/unmapped (via the 
corresponding memory region), so we can properly register a listener 
there as well."


However, I do agree that the notifier should work with a
MemoryRegionSection; this will make the "client" code much simpler.

The register()/replay_populate() mechanism should consume a
MemoryRegionSection to work on, and invoke the listener with an
(adjusted) MemoryRegionSection.
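
Adjusting could look something like the following sketch (assuming the
listener stores its registered section as above; the helper name is made
up here): clip the populated range against the registered section and
hand the clipped section to the listener:

static void ram_discard_listener_notify_clipped(RamDiscardListener *rdl,
                                                hwaddr offset, hwaddr size)
{
    MemoryRegionSection tmp = rdl->section;
    hwaddr start = MAX(tmp.offset_within_region, offset);
    hwaddr end = MIN(tmp.offset_within_region + int128_get64(tmp.size),
                     offset + size);

    /* Nothing to do if the range does not intersect the section. */
    if (start >= end) {
        return;
    }
    tmp.offset_within_address_space += start - tmp.offset_within_region;
    tmp.offset_within_region = start;
    tmp.size = int128_make64(end - start);
    rdl->notify_populate(rdl, &tmp);
}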

Maybe I'll even be able to simplify or get rid of the discard_all()
callback.

-- 
Thanks,

David / dhildenb



