From: Tamas K Lengyel <tamas@tklengyel.com>
To: Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>
Cc: "Stefano Stabellini" <sstabellini@kernel.org>,
"Wei Liu" <wei.liu2@citrix.com>,
"Razvan Cojocaru" <rcojocaru@bitdefender.com>,
"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>,
"George Dunlap" <George.Dunlap@eu.citrix.com>,
"Andrew Cooper" <andrew.cooper3@citrix.com>,
"Ian Jackson" <ian.jackson@eu.citrix.com>,
"Tim Deegan" <tim@xen.org>, "Julien Grall" <julien.grall@arm.com>,
"Jan Beulich" <jbeulich@suse.com>,
Xen-devel <xen-devel@lists.xenproject.org>,
"Roger Pau Monné" <roger.pau@citrix.com>
Subject: Re: [PATCH RFC 0/6] Slotted channels for sync vm_events
Date: Thu, 20 Dec 2018 07:08:39 -0700
Message-ID: <CABfawhmbX4fcN_MP5MkeJywwu_ZeNyc8VfJ4CG3_jKoWdXXLkQ@mail.gmail.com>
In-Reply-To: <fa2d73ba16b4dacd64f900c441272296234c0c00.camel@bitdefender.com>

On Thu, Dec 20, 2018 at 3:48 AM Petre Ovidiu PIRCALABU
<ppircalabu@bitdefender.com> wrote:
>
> On Wed, 2018-12-19 at 15:33 -0700, Tamas K Lengyel wrote:
> > On Wed, Dec 19, 2018 at 11:52 AM Petre Pircalabu
> > <ppircalabu@bitdefender.com> wrote:
> > >
> > > This patchset is a rework of the "multi-page ring buffer" for
> > > vm_events
> > > patch based on Andrew Cooper's comments.
> > > For synchronous vm_events the ring waitqueue logic was unnecessary
> > > as the
> > > vcpu sending the request was blocked until a response was received.
> > > To simplify the request/response mechanism, an array of slotted
> > > channels
> > > was created, one per vcpu. Each vcpu puts the request in the
> > > corresponding slot and blocks until the response is received.
> > >
> > > I'm sending this patch as an RFC because, while I'm still working
> > > on a way to measure the overall performance improvement, your
> > > feedback would be of great assistance.
> >
> > Generally speaking this approach is OK, but I'm concerned that we
> > will eventually run into the same problem that brought up the idea
> > of using multi-page rings: vm_event structures that are larger than
> > a page. Right now this series adds a ring for each vCPU, which does
> > mitigate some of the bottleneck, but it does not really address the
> > root cause. It also adds significant complexity, as the userspace
> > side now has to map in multiple rings, each with its own event
> > channel and polling requirements.
> >
> > Tamas
> The memory for the vm_event "rings" (for synchronous vm_events,
> actually an array of vm_event_slot structures: state + vm_event_request
> / vm_event_response) is allocated directly from the domheap and spans
> as many pages as necessary.
Ah, OK, I missed that. In that case that is fine :)
> Regarding the userspace complexity, unfortunately I haven't had a
> better idea (but I'm open to suggestions).
> In order to have a lock-free mechanism to access the vm_event data,
> each vcpu should access only its own slot (referenced by vcpu_id).
> I have used the "one event channel per slot + one for the async ring"
> approach because, to my understanding, the only additional information
> an event channel can carry is the vcpu on which it is triggered.
Right, the alternative would be to have a single event channel, and then
the userspace side has to check each slot manually to see which one was
updated. That's not really ideal either, so I would stick with the
current approach of having multiple event channels.
Thanks!
Tamas