qemu-devel.nongnu.org archive mirror
From: Yishai Hadas <yishaih@nvidia.com>
To: <alex.williamson@redhat.com>
Cc: <jgg@nvidia.com>, <qemu-devel@nongnu.org>, <kwankhede@nvidia.com>,
	<cohuck@redhat.com>, <yishaih@nvidia.com>, <maorg@nvidia.com>
Subject: [PATCH] vfio/migration: Improve to read/write full migration region per chunk
Date: Thu, 11 Nov 2021 11:50:40 +0200	[thread overview]
Message-ID: <20211111095040.183977-1-yishaih@nvidia.com> (raw)

When reading/writing the migration data, there is no real reason to limit
each read/write system call on the file to 8 bytes.

In addition, there is no reason to depend on the file offset alignment.
The offset is just a logical value which also depends on the region
index and has nothing to do with the amount of data that can be
accessed.

Move to reading/writing the full region size per chunk; this dramatically
reduces the number of system calls needed and improves performance.

Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
---
 hw/vfio/migration.c | 36 ++----------------------------------
 1 file changed, 2 insertions(+), 34 deletions(-)

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index ff6b45de6b5..b5f310bb831 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -62,40 +62,8 @@ static inline int vfio_mig_access(VFIODevice *vbasedev, void *val, int count,
     return 0;
 }
 
-static int vfio_mig_rw(VFIODevice *vbasedev, __u8 *buf, size_t count,
-                       off_t off, bool iswrite)
-{
-    int ret, done = 0;
-    __u8 *tbuf = buf;
-
-    while (count) {
-        int bytes = 0;
-
-        if (count >= 8 && !(off % 8)) {
-            bytes = 8;
-        } else if (count >= 4 && !(off % 4)) {
-            bytes = 4;
-        } else if (count >= 2 && !(off % 2)) {
-            bytes = 2;
-        } else {
-            bytes = 1;
-        }
-
-        ret = vfio_mig_access(vbasedev, tbuf, bytes, off, iswrite);
-        if (ret) {
-            return ret;
-        }
-
-        count -= bytes;
-        done += bytes;
-        off += bytes;
-        tbuf += bytes;
-    }
-    return done;
-}
-
-#define vfio_mig_read(f, v, c, o)       vfio_mig_rw(f, (__u8 *)v, c, o, false)
-#define vfio_mig_write(f, v, c, o)      vfio_mig_rw(f, (__u8 *)v, c, o, true)
+#define vfio_mig_read(f, v, c, o)       vfio_mig_access(f, (__u8 *)v, c, o, false)
+#define vfio_mig_write(f, v, c, o)      vfio_mig_access(f, (__u8 *)v, c, o, true)
 
 #define VFIO_MIG_STRUCT_OFFSET(f)       \
                                  offsetof(struct vfio_device_migration_info, f)
-- 
2.18.1



Thread overview: 4+ messages
2021-11-11  9:50 Yishai Hadas [this message]
2021-11-22  7:40 ` [PATCH] vfio/migration: Improve to read/write full migration region per chunk Yishai Hadas
2021-11-22 13:38   ` Alex Williamson
2021-11-25 16:33 ` Cornelia Huck
