From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Petre Pircalabu <ppircalabu@bitdefender.com>,
xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
Razvan Cojocaru <rcojocaru@bitdefender.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Tim Deegan <tim@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
Julien Grall <julien.grall@arm.com>,
Tamas K Lengyel <tamas@tklengyel.com>,
Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 4/4] vm_event: Add support for multi-page ring buffer
Date: Mon, 17 Sep 2018 15:41:27 +0100
Message-ID: <f2fe7638-3cc3-377d-bf6e-2273943fe1a1@citrix.com>
In-Reply-To: <145fcbfb13ae8027df5fefdaa88d537d2d976b7b.1536850239.git.ppircalabu@bitdefender.com>
On 13/09/18 16:02, Petre Pircalabu wrote:
> In high-throughput introspection scenarios, where lots of monitor
> vm_events are generated, the ring buffer can fill up before the monitor
> application gets a chance to handle all the requests, thus blocking
> other vCPUs, which have to wait for a slot to become available.
>
> This patch adds support for extending the ring buffer by allocating a
> number of pages from domheap and mapping them to the monitor
> application's domain using the foreignmemory_map_resource interface.
> Unlike the current implementation, the ring buffer pages are not part of
> the introspected DomU, so they will not be reclaimed when the monitor is
> disabled.
>
> Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
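(As context for the below: on the monitor side, the mapping described
above would presumably boil down to a single
xenforeignmemory_map_resource() call along these lines.  The resource
type constant is a placeholder for whatever this series ends up
defining, so treat the sketch as illustrative only:)

    /* Hedged sketch of the monitor-side mapping.
     * XENMEM_resource_vm_event is a placeholder name, not an
     * existing constant. */
    void *ring = NULL;
    xenforeignmemory_resource_handle *fres =
        xenforeignmemory_map_resource(fmem, domid,
                                      XENMEM_resource_vm_event, /* placeholder */
                                      0 /* id */, 0 /* frame */, nr_frames,
                                      &ring, PROT_READ | PROT_WRITE, 0);

    if ( !fres )
        return -errno;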
What about the slotted format for the synchronous events? While this is
fine for the async bits, I don't think we want to end up changing the
mapping API twice.
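To illustrate what I mean by slotted (the names and state values here
are invented for the example, not a proposal for the final ABI):

    /* Illustrative only: a fixed request/response slot per vCPU, so
     * a vCPU never has to block waiting for ring space. */
    struct vm_event_slot {
        uint32_t state;               /* e.g. IDLE / PENDING / FINISHED */
        vm_event_request_t  req;
        vm_event_response_t rsp;
    };

    /* The shared area then becomes a flat array indexed by vcpu_id,
     * sized for d->max_vcpus, rather than a ring. */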
Simply increasing the size of the ring puts more pressure on the
allocator, and only defers the point at which the ring fills up.
> diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> index 0d23e52..2a9cbf3 100644
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -331,10 +331,9 @@ void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
> {
> mfn_t mfn[nr];
> int i;
> - struct page_info *cur_pg = (struct page_info *)&pg[0];
>
> for (i = 0; i < nr; i++)
> - mfn[i] = page_to_mfn(cur_pg++);
> + mfn[i] = page_to_mfn(pg++);
This hunk looks like it should be in the previous patch? That said...
>
> return map_domain_pages_global(mfn, nr);
> }
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 4793aac..faece3c 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -39,16 +39,66 @@
> #define vm_event_ring_lock(_ved) spin_lock(&(_ved)->ring_lock)
> #define vm_event_ring_unlock(_ved) spin_unlock(&(_ved)->ring_lock)
>
> +#define XEN_VM_EVENT_ALLOC_FROM_DOMHEAP 0xFFFFFFFF
> +
> +static int vm_event_alloc_ring(struct domain *d, struct vm_event_domain *ved)
> +{
> + struct page_info *page;
> + void *va = NULL;
> + int i, rc = -ENOMEM;
> +
> + page = alloc_domheap_pages(d, ved->ring_order, MEMF_no_refcount);
> + if ( !page )
> + return -ENOMEM;
... what is wrong with vzalloc()?
You don't want to be making a ring_order allocation, especially as the
order grows. All you need are some mappings which are virtually
contiguous, not physically contiguous.
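i.e. something like the following should suffice (untested sketch;
error handling elided, and vmap_to_mfn() stands in for however the
individual frames end up being handed to the resource-mapping code):

    /* vzalloc() returns zeroed, virtually contiguous memory backed by
     * individually allocated pages, so no order-N physically
     * contiguous block is required. */
    void *va = vzalloc(nr_pages * PAGE_SIZE);
    unsigned int i;

    if ( !va )
        return -ENOMEM;

    /* Each frame can still be exposed for mapping, e.g. by looking up
     * the MFN backing each page-sized chunk. */
    for ( i = 0; i < nr_pages; i++ )
        mfn[i] = vmap_to_mfn(va + i * PAGE_SIZE);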
~Andrew