public inbox for qemu-devel@nongnu.org
 help / color / mirror / Atom feed
* [GSoC 2026] vhost-user memory isolation proposal feedback request
@ 2026-03-09 11:17 Han Zhang
  2026-03-09 16:40 ` Hanna Czenczek
  2026-03-24 17:26 ` Hanna Czenczek
  0 siblings, 2 replies; 5+ messages in thread
From: Han Zhang @ 2026-03-09 11:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, tfanelli, hreitz

Hello,

My name is Han. I previously implemented a virtio-based communication
mechanism between confidential virtual machines, and based on that
experience I would like to apply for the QEMU GSoC 2026 project
"vhost-user memory isolation". Before finalizing my proposal, I would
like to check whether my understanding of the project direction is
correct.

My current understanding is:
without changing the existing vhost-user protocol, add a
memory-isolation mode for vhost-user devices so the backend no longer
directly accesses guest RAM. Instead, QEMU intercepts virtqueue
requests, copies data between guest RAM and isolated buffers, and
forwards notifications. The backend only sees QEMU-managed shadow
virtqueues and descriptors pointing to isolated buffers.

After reading the relevant code paths around vhost-user-blk and SVQ,
my current understanding of the required work is roughly:
1. Extend the generic SVQ path for the vhost-user case, including
adding a used_handler so completion handling can perform copy-back and
cleanup before returning requests to the guest virtqueue.
2. Move the SVQ vring memory to memfd-backed shared regions and
register them with the backend through add-mem-reg/rem-mem-reg, so the
userspace backend can access the shadow vring.
3. Allocate bounce or isolated buffers at the SVQ callback point, copy
data from the guest virtqueue into those buffers, forward rewritten
descriptors to the backend, and copy data back on completion.

I am mainly trying to validate whether this is the right architectural
direction, especially the split between generic reusable vhost-user
SVQ code and device-specific handling such as the vhost-user-blk
bounce-buffer path.

I would appreciate feedback on the following:
1. Is this interpretation of the core goal correct, especially "QEMU
performs data copy, backend only sees isolated memory + SVQ"?
2. For isolated buffers, is qemu_memfd_alloc + add-mem-reg the
preferred direction, or is there a better approach?
3. For code organization, what split is preferred between generic
vhost-user code and device-specific code (for example vhost-user-blk)?

This is my first time participating in an open source project, so I
would greatly appreciate any correction or guidance.

Thank you very much for your time.

Best regards,
Han


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [GSoC 2026] vhost-user memory isolation proposal feedback request
  2026-03-09 11:17 [GSoC 2026] vhost-user memory isolation proposal feedback request Han Zhang
@ 2026-03-09 16:40 ` Hanna Czenczek
  2026-03-24 17:26 ` Hanna Czenczek
  1 sibling, 0 replies; 5+ messages in thread
From: Hanna Czenczek @ 2026-03-09 16:40 UTC (permalink / raw)
  To: Han Zhang, qemu-devel; +Cc: stefanha, tfanelli

On 09.03.26 12:17, Han Zhang wrote:
> Hello,
>
> My name is Han. I previously implemented a virtio-based communication
> mechanism between confidential virtual machines, and based on that
> experience I would like to apply for the QEMU GSoC 2026 project
> "vhost-user memory isolation". Before finalizing my proposal, I would
> like to check whether my understanding of the project direction is
> correct.

Hello Han!

Thank you for your interest in this project, good to hear you already 
have experience with virtio!

> My current understanding is:
> without changing the existing vhost-user protocol, add a
> memory-isolation mode for vhost-user devices so the backend no longer
> directly accesses guest RAM. Instead, QEMU intercepts virtqueue
> requests, copies data between guest RAM and isolated buffers, and
> forwards notifications. The backend only sees QEMU-managed shadow
> virtqueues and descriptors pointing to isolated buffers.

That is correct.

> After reading the relevant code paths around vhost-user-blk and SVQ,
> my current understanding of the required work is roughly:
> 1. Extend the generic SVQ path for the vhost-user case, including
> adding a used_handler so completion handling can perform copy-back and
> cleanup before returning requests to the guest virtqueue.

You mean used_handler as a counterpart to avail_handler?  That makes 
sense indeed.

> 2. Move the SVQ vring memory to memfd-backed shared regions and
> register them with the backend through add-mem-reg/rem-mem-reg, so the
> userspace backend can access the shadow vring.

That must happen in some capacity, although I would have assumed that 
there is already a mechanism for this, for the vring memory itself.

> 3. Allocate bounce or isolated buffers at the SVQ callback point, copy
> data from the guest virtqueue into those buffers, forward rewritten
> descriptors to the backend, and copy data back on completion.

Right.  And these buffers would need to be shared with the back-end,
too.  Ideally, they are cached, of course, to reduce the number of
buffer add/remove operations that need to be run.

> I am mainly trying to validate whether this is the right architectural
> direction, especially the split between generic reusable vhost-user
> SVQ code and device-specific handling such as the vhost-user-blk
> bounce-buffer path.
>
> I would appreciate feedback on the following:
> 1. Is this interpretation of the core goal correct, especially "QEMU
> performs data copy, backend only sees isolated memory + SVQ"?

Yes, it is.

> 2. For isolated buffers, is qemu_memfd_alloc + add-mem-reg the
> preferred direction, or is there a better approach?

I’ll defer to Tyler and Stefan on this, but in general, I would say if 
it works, it works.  It does sound good to me, fwiw.

> 3. For code organization, what split is preferred between generic
> vhost-user code and device-specific code (for example vhost-user-blk)?

Ideally, it is completely generic, nothing in the device-specific code.

> This is my first time participating in an open source project, so I
> would greatly appreciate any correction or guidance.

Perfect for GSoC! :)

Hanna

> Thank you very much for your time.
>
> Best regards,
> Han
>



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [GSoC 2026] vhost-user memory isolation proposal feedback request
  2026-03-09 11:17 [GSoC 2026] vhost-user memory isolation proposal feedback request Han Zhang
  2026-03-09 16:40 ` Hanna Czenczek
@ 2026-03-24 17:26 ` Hanna Czenczek
  2026-03-25  2:11   ` Han Zhang
  1 sibling, 1 reply; 5+ messages in thread
From: Hanna Czenczek @ 2026-03-24 17:26 UTC (permalink / raw)
  To: Han Zhang, qemu-devel; +Cc: stefanha, tfanelli

Hi Han,

I’ve taken a look at your proposal, thanks for submitting it!

It clearly shows you looked into the code and the vhost-user spec and 
made yourself familiar with both, that’s very good. I find it a solid 
concept. I also like your clear separation into fundamental work and
later optimization.

I have one question: In the optimization section, you suggest using a 
memory pool. I fully agree that’s a good optimization, but it makes me 
wonder what model you have in mind for the original implementation. Do 
you plan on allocating a new buffer for each request? (To be clear: I’m 
not saying that would be a bad idea for the initial proof-of-concept, 
I’m just asking for clarification.)

To help make these detail questions clearer, could you lay out how an 
example request from the guest to the vhost-user back-end would go 
through all the layers, and the response back?


Hanna



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [GSoC 2026] vhost-user memory isolation proposal feedback request
  2026-03-24 17:26 ` Hanna Czenczek
@ 2026-03-25  2:11   ` Han Zhang
  2026-03-25 16:27     ` Hanna Czenczek
  0 siblings, 1 reply; 5+ messages in thread
From: Han Zhang @ 2026-03-25  2:11 UTC (permalink / raw)
  To: Hanna Czenczek; +Cc: qemu-devel, stefanha, tfanelli

On Wed, Mar 25, 2026 at 1:26 AM Hanna Czenczek <hreitz@redhat.com> wrote:
>
> Hi Han,
>
> I’ve taken a look at your proposal, thanks for submitting it!
>
> It clearly shows you looked into the code and the vhost-user spec and
> made yourself familiar with both, that’s very good. I find it a solid
> concept. I also like your clear separation into fundamental work and later
> optimization.
>
> I have one question: In the optimization section, you suggest using a
> memory pool. I fully agree that’s a good optimization, but it makes me
> wonder what model you have in mind for the original implementation. Do
> you plan on allocating a new buffer for each request? (To be clear: I’m
> not saying that would be a bad idea for the initial proof-of-concept,
> I’m just asking for clarification.)
>

Hi Hanna,

Thank you very much for reviewing my proposal carefully, and for
raising such a precise question — it is extremely helpful.

To answer directly: yes, in the initial implementation phase I plan
to allocate bounce buffers per request (more precisely, per
VirtQueueElement) and release them once the request is completed.
This is a correctness-first approach.

> To help make these detail questions clearer, could you lay out how an
> example request from the guest to the vhost-user back-end would go
> through all the layers, and the response back?

In the initial implementation, the request path from guest ->
vhost-user backend -> guest is as follows:

1. Guest submits the descriptor and kicks the queue.
2. QEMU intercepts the request in the SVQ avail handler.
3. QEMU allocates isolated bounce memory for the request and
registers it to the backend via ADD_MEM_REG.
4. For the guest->device buffers, QEMU copies data from guest RAM
into the bounce memory.
5. QEMU constructs shadow descriptors pointing into bounce memory,
injects them into the shadow vring, and forwards the request
to the backend.
6. The backend handles the request: it reads input segments and
writes responses to output segments as needed, then updates
the used ring.
7. QEMU processes completion in the SVQ used handler.
8. For the device->guest buffers, QEMU copies data back from the
bounce memory to guest RAM and completes the request for the guest.
9. QEMU performs REM_MEM_REG and frees the request's resources.

Once this path is functionally correct and covered by tests, I will
prioritize evaluating and rolling out a memory pool approach:
pre-allocate and pre-register a reusable set of bounce memory at
device start, then lease/return buffers during requests. The pool
depth will be decided based on implementation complexity and
measured benefit.

If you think it makes sense, I can also add this "request lifecycle example"
to the proposal text for more clarity.

Thank you!
Han


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [GSoC 2026] vhost-user memory isolation proposal feedback request
  2026-03-25  2:11   ` Han Zhang
@ 2026-03-25 16:27     ` Hanna Czenczek
  0 siblings, 0 replies; 5+ messages in thread
From: Hanna Czenczek @ 2026-03-25 16:27 UTC (permalink / raw)
  To: Han Zhang; +Cc: qemu-devel, stefanha, tfanelli

On 25.03.26 03:11, Han Zhang wrote:
> On Wed, Mar 25, 2026 at 1:26 AM Hanna Czenczek <hreitz@redhat.com> wrote:
>> Hi Han,
>>
>> I’ve taken a look at your proposal, thanks for submitting it!
>>
>> It clearly shows you looked into the code and the vhost-user spec and
>> made yourself familiar with both, that’s very good. I find it a solid
>> concept. I also like your clear separation into fundamental work and later
>> optimization.
>>
>> I have one question: In the optimization section, you suggest using a
>> memory pool. I fully agree that’s a good optimization, but it makes me
>> wonder what model you have in mind for the original implementation. Do
>> you plan on allocating a new buffer for each request? (To be clear: I’m
>> not saying that would be a bad idea for the initial proof-of-concept,
>> I’m just asking for clarification.)
>>
> Hi Hanna,
>
> Thank you very much for reviewing my proposal carefully, and for
> raising such a precise question — it is extremely helpful.
>
> To answer directly: yes, in the initial implementation phase I plan
> to allocate bounce buffers per request (more precisely, per
> VirtQueueElement) and release them once the request is completed.
> This is a correctness-first approach.

Agreed, sounds good!

>> To help make these detail questions clearer, could you lay out how an
>> example request from the guest to the vhost-user back-end would go
>> through all the layers, and the response back?
> In the initial implementation, the request path from guest ->
> vhost-user backend -> guest is as follows:
>
> 1. Guest submits the descriptor and kicks the queue.
> 2. QEMU intercepts the request in the SVQ avail handler.
> 3. QEMU allocates isolated bounce memory for the request and
> registers it to the backend via ADD_MEM_REG.
> 4. For the guest->device buffers, QEMU copies data from guest RAM
> into the bounce memory.
> 5. QEMU constructs shadow descriptors pointing into bounce memory,
> injects them into the shadow vring, and forwards the request
> to the backend.
> 6. The backend handles the request: it reads input segments and
> writes responses to output segments as needed, then updates
> the used ring.
> 7. QEMU processes completion in the SVQ used handler.
> 8. For the device->guest buffers, QEMU copies data back from the
> bounce memory to guest RAM and completes the request for the guest.
> 9. QEMU performs REM_MEM_REG and frees the request's resources.

Yes, that sounds like the right path.

> Once this path is functionally correct and covered by tests, I will
> prioritize evaluating and rolling out a memory pool approach:
> pre-allocate and pre-register a reusable set of bounce memory at
> device start, then lease/return buffers during requests. The pool
> depth will be decided based on implementation complexity and
> measured benefit.

Agreed.

> If you think it makes sense, I can also add this "request lifecycle example"
> to the proposal text for more clarity.

I think it would be nice as an illustration, yes.


Thanks!

Hanna



^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2026-03-25 16:28 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-09 11:17 [GSoC 2026] vhost-user memory isolation proposal feedback request Han Zhang
2026-03-09 16:40 ` Hanna Czenczek
2026-03-24 17:26 ` Hanna Czenczek
2026-03-25  2:11   ` Han Zhang
2026-03-25 16:27     ` Hanna Czenczek

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox