qemu-devel.nongnu.org archive mirror
From: linhafieng <haifeng.lin@huawei.com>
To: qemu-devel@nongnu.org
Cc: damarion@cisco.com,
	"mst@redhat.com >> Michael S. Tsirkin" <mst@redhat.com>,
	jerry.lilijun@huawei.com, n.nikolaev@virtualopensystems.com,
	pbonzini@redhat.com, tech@virtualopensystems.com
Subject: [Qemu-devel] Fwd: Re: the userspace process vapp mmap filed // [PULL 13/37] vhost-user: fix regions provied with VHOST_USER_SET_MEM_TABLE message
Date: Tue, 9 Sep 2014 20:28:37 +0800	[thread overview]
Message-ID: <540EF275.10201@huawei.com> (raw)
In-Reply-To: <540EE844.8080507@huawei.com>




-------- Forwarded Message --------
Subject: Re: the userspace process vapp mmap filed //[Qemu-devel] [PULL 13/37] vhost-user: fix regions provied with VHOST_USER_SET_MEM_TABLE message
Date: Tue, 09 Sep 2014 19:45:08 +0800
From: linhafieng <haifeng.lin@huawei.com>
To: Michael S. Tsirkin <mst@redhat.com>
CC: n.nikolaev@virtualopensystems.com, jerry.lilijun@huawei.com, qemu-devel@nongnu.org, pbonzini@redhat.com, damarion@cisco.com, tech@virtualopensystems.com

On 2014/9/3 15:08, Michael S. Tsirkin wrote:
> On Wed, Sep 03, 2014 at 02:26:03PM +0800, linhafieng wrote:
>> I ran the userspace process vapp to test the VHOST_USER_SET_MEM_TABLE message and found that the user space failed to mmap.
> 
> Why off-list?
> pls copy qemu mailing list and pbonzini@redhat.com
> 
> 


I wrote a patch for vapp to test the broken-mem-regions patch. With it, vapp can receive data from the VM, but there is still an mmap failure.

I have some questions about the patch and vhost-user:
1. Can I mmap the fd of every mem region? Why does mmap fail for some regions, and does that have any impact?
2. Why has vapp not been updated to match the broken-mem-regions patch?
3. Would a vhost-user test program that exercises the vring memory be more meaningful?
4. How does a switch port find the vhost-user device? By the socket path?
5. Should one vhost-user process manage all the backend socket fds, or is there a better approach?


My patch for vapp is:

diff -uNr vapp/vhost_server.c vapp-for-broken-mem-region//vhost_server.c
--- vapp/vhost_server.c 2014-08-30 09:39:20.000000000 +0000
+++ vapp-for-broken-mem-region//vhost_server.c  2014-09-09 11:36:50.000000000 +0000
@@ -147,18 +147,22 @@

     for (idx = 0; idx < msg->msg.memory.nregions; idx++) {
         if (msg->fds[idx] > 0) {
+            size_t size;
+            uint64_t *guest_mem;
             VhostServerMemoryRegion *region = &vhost_server->memory.regions[idx];

             region->guest_phys_addr = msg->msg.memory.regions[idx].guest_phys_addr;
             region->memory_size = msg->msg.memory.regions[idx].memory_size;
             region->userspace_addr = msg->msg.memory.regions[idx].userspace_addr;
-
+            region->mmap_offset = msg->msg.memory.regions[idx].mmap_offset;
+
             assert(idx < msg->fd_num);
             assert(msg->fds[idx] > 0);

-            region->mmap_addr =
-                    (uintptr_t) init_shm_from_fd(msg->fds[idx], region->memory_size);
-
+            size = region->memory_size + region->mmap_offset;
+            guest_mem = init_shm_from_fd(msg->fds[idx], size);
+            guest_mem += (region->mmap_offset / sizeof(*guest_mem));
+            region->mmap_addr = (uint64_t)guest_mem;
             vhost_server->memory.nregions++;
         }
     }
diff -uNr vapp/vhost_server.h vapp-for-broken-mem-region//vhost_server.h
--- vapp/vhost_server.h 2014-08-30 09:39:20.000000000 +0000
+++ vapp-for-broken-mem-region//vhost_server.h  2014-09-05 01:41:27.000000000 +0000
@@ -13,7 +13,9 @@
     uint64_t guest_phys_addr;
     uint64_t memory_size;
     uint64_t userspace_addr;
+       uint64_t mmap_offset;
     uint64_t mmap_addr;
+
 } VhostServerMemoryRegion;

 typedef struct VhostServerMemory {
diff -uNr vapp/vhost_user.h vapp-for-broken-mem-region//vhost_user.h
--- vapp/vhost_user.h   2014-08-30 09:39:20.000000000 +0000
+++ vapp-for-broken-mem-region//vhost_user.h    2014-09-05 01:40:20.000000000 +0000
@@ -13,6 +13,7 @@
     uint64_t guest_phys_addr;
     uint64_t memory_size;
     uint64_t userspace_addr;
+       uint64_t mmap_offset;
 } VhostUserMemoryRegion;

 typedef struct VhostUserMemory {


The result of running vapp with my patch:
................................................................................
Processing message: VHOST_USER_SET_OWNER
_set_owner
Cmd: VHOST_USER_GET_FEATURES (0x1)
Flags: 0x1
u64: 0x0
................................................................................
Processing message: VHOST_USER_GET_FEATURES
_get_features
Cmd: VHOST_USER_SET_VRING_CALL (0xd)
Flags: 0x1
u64: 0x0
................................................................................
Processing message: VHOST_USER_SET_VRING_CALL
_set_vring_call
Got callfd 0x5
Cmd: VHOST_USER_SET_VRING_CALL (0xd)
Flags: 0x1
u64: 0x1
................................................................................
Processing message: VHOST_USER_SET_VRING_CALL
_set_vring_call
Got callfd 0x6
Cmd: VHOST_USER_SET_FEATURES (0x2)
Flags: 0x1
u64: 0x0
................................................................................
Processing message: VHOST_USER_SET_FEATURES
_set_features
Cmd: VHOST_USER_SET_MEM_TABLE (0x5)
Flags: 0x1
nregions: 2
region:
        gpa = 0x0
        size = 655360
        ua = 0x7f76c0000000 [0]
region:
        gpa = 0xC0000
        size = 2146697216
        ua = 0x7f76c00c0000 [1]
................................................................................
Processing message: VHOST_USER_SET_MEM_TABLE
_set_mem_table
mmap: Invalid argument   // <-- region 0 mmap failed!
Got memory.nregions 2
Cmd: VHOST_USER_SET_VRING_NUM (0x8)
Flags: 0x1
state: 0 256
................................................................................
Processing message: VHOST_USER_SET_VRING_NUM
_set_vring_num
Cmd: VHOST_USER_SET_VRING_BASE (0xa)
Flags: 0x1
state: 0 0
................................................................................
Processing message: VHOST_USER_SET_VRING_BASE
_set_vring_base
Cmd: VHOST_USER_SET_VRING_ADDR (0x9)
Flags: 0x1
addr:
        idx = 0
        flags = 0x0
        dua = 0x7f76f7f54000
        uua = 0x7f76f7f56000
        aua = 0x7f76f7f55000
        lga = 0x37f56000
................................................................................
Processing message: VHOST_USER_SET_VRING_ADDR
_set_vring_addr
Cmd: VHOST_USER_SET_VRING_KICK (0xc)
Flags: 0x1
u64: 0x0
................................................................................
Processing message: VHOST_USER_SET_VRING_KICK
_set_vring_kick
Got kickfd 0x9
Cmd: VHOST_USER_SET_VRING_NUM (0x8)
Flags: 0x1
state: 1 256
................................................................................
Processing message: VHOST_USER_SET_VRING_NUM
_set_vring_num
Cmd: VHOST_USER_SET_VRING_BASE (0xa)
Flags: 0x1
state: 1 0
................................................................................
Processing message: VHOST_USER_SET_VRING_BASE
_set_vring_base
Cmd: VHOST_USER_SET_VRING_ADDR (0x9)
Flags: 0x1
addr:
        idx = 1
        flags = 0x0
        dua = 0x7f7739834000
        uua = 0x7f7739836000
        aua = 0x7f7739835000
        lga = 0x79836000
................................................................................
Processing message: VHOST_USER_SET_VRING_ADDR
_set_vring_addr
Cmd: VHOST_USER_SET_VRING_KICK (0xc)
Flags: 0x1
u64: 0x1
................................................................................
Processing message: VHOST_USER_SET_VRING_KICK
_set_vring_kick
Got kickfd 0xa
Listening for kicks on 0xa
Cmd: VHOST_USER_SET_VRING_CALL (0xd)
Flags: 0x1
u64: 0x0
................................................................................
Processing message: VHOST_USER_SET_VRING_CALL
_set_vring_call
Got callfd 0xb
Cmd: VHOST_USER_SET_VRING_CALL (0xd)
Flags: 0x1
u64: 0x1
................................................................................
Processing message: VHOST_USER_SET_VRING_CALL
_set_vring_call
Got callfd 0xc
chunks: 10 90
................................................................................
33 33 00 00 00 16 52 54 00 12 34 56 86 dd 60 00
00 00 00 24 00 01 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 ff 02 00 00 00 00 00 00 00 00
00 00 00 00 00 16 3a 00 05 02 00 00 01 00 8f 00
3b 22 00 00 00 01 04 00 00 00 ff 02 00 00 00 00
00 00 00 00 00 01 ff 12 34 56
chunks: 10 78
................................................................................
33 33 ff 12 34 56 52 54 00 12 34 56 86 dd 60 00
00 00 00 18 3a ff 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 ff 02 00 00 00 00 00 00 00 00
00 01 ff 12 34 56 87 00 c4 02 00 00 00 00 fe 80
00 00 00 00 00 00 50 54 00 ff fe 12 34 56
chunks: 10 70
................................................................................
33 33 00 00 00 02 52 54 00 12 34 56 86 dd 60 00
00 00 00 10 3a ff fe 80 00 00 00 00 00 00 50 54
00 ff fe 12 34 56 ff 02 00 00 00 00 00 00 00 00
00 00 00 00 00 02 85 00 71 b5 00 00 00 00 01 01
52 54 00 12 34 56
chunks: 10 90
................................................................................


Thread overview: 6+ messages
     [not found] <5406B47B.7070006@huawei.com>
     [not found] ` <20140903070831.GB5449@redhat.com>
2014-09-09 11:45   ` [Qemu-devel] the userspace process vapp mmap filed // [PULL 13/37] vhost-user: fix regions provied with VHOST_USER_SET_MEM_TABLE message linhafieng
2014-09-09 12:28     ` linhafieng [this message]
2014-09-09 17:54       ` Nikolay Nikolaev
2014-09-09 20:40         ` Michael S. Tsirkin
2014-09-10  3:00         ` Linhaifeng
2014-09-10 11:01           ` Nikolay Nikolaev
