qemu-devel.nongnu.org archive mirror
* [PATCH 0/2] block: propagate discard alignment from format drivers to the guest
@ 2020-06-11 17:16 Denis V. Lunev
  2020-06-11 17:16 ` [PATCH 1/2] " Denis V. Lunev
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Denis V. Lunev @ 2020-06-11 17:16 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Fam Zheng, Eduardo Habkost, Max Reitz, Paolo Bonzini,
	Denis V . Lunev, John Snow

Nowadays, SCSI drivers in guests are able to align UNMAP requests before
sending them to the device. Right now QEMU provides the ability to set
this via the "discard_granularity" property of the block device, which
can be used by the management layer.

From QEMU's point of view, however, there is already a pdiscard_alignment
at the format driver level, e.g. in QCOW2 or iSCSI. It would be
beneficial to pass this value as the default for the property.

Technically this should reduce the number of useless UNMAP requests
from the guest to the host. A basic test confirms this: a Fedora 31
guest running 'fstrim /' on a 32 GB disk issued 401/415 requests
with/without proper alignment reported to QEMU.

Changes from v2:
- 172 iotest fixed

Changes from v1:
- fixed typos in description
- added machine type compatibility layer as suggested by Kevin

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Eduardo Habkost <ehabkost@redhat.com>
CC: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
CC: John Snow <jsnow@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Fam Zheng <fam@euphon.net>




^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/2] block: propagate discard alignment from format drivers to the guest
  2020-06-11 17:16 [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
@ 2020-06-11 17:16 ` Denis V. Lunev
  2020-06-11 17:16 ` [PATCH 2/2] iotests: fix 172 test Denis V. Lunev
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Denis V. Lunev @ 2020-06-11 17:16 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Fam Zheng, Eduardo Habkost, Max Reitz, Paolo Bonzini,
	Denis V. Lunev, John Snow

Nowadays, SCSI drivers in guests are able to align UNMAP requests before
sending them to the device. Right now QEMU provides the ability to set
this via the "discard_granularity" property of the block device, which
can be used by the management layer.

From QEMU's point of view, however, there is already a pdiscard_alignment
at the format driver level, e.g. in QCOW2 or iSCSI. It would be
beneficial to pass this value as the default for the property.

Technically this should reduce the number of useless UNMAP requests
from the guest to the host. A basic test confirms this.

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Eduardo Habkost <ehabkost@redhat.com>
CC: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
CC: John Snow <jsnow@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Fam Zheng <fam@euphon.net>
---
 block/block-backend.c          | 11 +++++++++++
 hw/core/machine.c              | 15 ++++++++++++++-
 hw/ide/qdev.c                  |  3 ++-
 hw/scsi/scsi-disk.c            |  5 ++++-
 include/hw/block/block.h       |  2 +-
 include/sysemu/block-backend.h |  6 ++++++
 6 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 6936b25c83..9342a475cb 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -2222,6 +2222,17 @@ int blk_probe_geometry(BlockBackend *blk, HDGeometry *geo)
     return bdrv_probe_geometry(blk_bs(blk), geo);
 }
 
+int blk_discard_granularity(BlockBackend *blk)
+{
+    BlockDriverState *bs = blk_bs(blk);
+
+    if (bs == NULL) {
+        return DEFAULT_DISCARD_GRANULARITY;
+    }
+
+    return bs->bl.pdiscard_alignment;
+}
+
 /*
  * Updates the BlockBackendRootState object with data from the currently
  * attached BlockDriverState.
diff --git a/hw/core/machine.c b/hw/core/machine.c
index bb3a7b18b1..08a242d606 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -28,7 +28,20 @@
 #include "hw/mem/nvdimm.h"
 #include "migration/vmstate.h"
 
-GlobalProperty hw_compat_5_0[] = {};
+GlobalProperty hw_compat_5_0[] = {
+    { "ide-cd", "discard_granularity", "0xffffffff" },
+    { "ide-hd", "discard_granularity", "0xffffffff" },
+    { "ide-drive", "discard_granularity", "0xffffffff" },
+    { "scsi-hd", "discard_granularity", "0xffffffff" },
+    { "scsi-cd", "discard_granularity", "0xffffffff" },
+    { "scsi-disk", "discard_granularity", "0xffffffff" },
+    { "virtio-blk-pci", "discard_granularity", "0xffffffff" },
+    { "xen-block", "discard_granularity", "0xffffffff" },
+    { "usb-storage", "discard_granularity", "0xffffffff" },
+    { "swim-drive", "discard_granularity", "0xffffffff" },
+    { "floppy", "discard_granularity", "0xffffffff" },
+    { "nvme", "discard_granularity", "0xffffffff" },
+};
 const size_t hw_compat_5_0_len = G_N_ELEMENTS(hw_compat_5_0);
 
 GlobalProperty hw_compat_4_2[] = {
diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
index 06b11583f5..e515dbeb0e 100644
--- a/hw/ide/qdev.c
+++ b/hw/ide/qdev.c
@@ -179,7 +179,8 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
         }
     }
 
-    if (dev->conf.discard_granularity == -1) {
+    if (dev->conf.discard_granularity == -1 ||
+        dev->conf.discard_granularity == -2) {
         dev->conf.discard_granularity = 512;
     } else if (dev->conf.discard_granularity &&
                dev->conf.discard_granularity != 512) {
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 387503e11b..6b809608e4 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -48,7 +48,6 @@
 #define SCSI_MAX_INQUIRY_LEN        256
 #define SCSI_MAX_MODE_LEN           256
 
-#define DEFAULT_DISCARD_GRANULARITY (4 * KiB)
 #define DEFAULT_MAX_UNMAP_SIZE      (1 * GiB)
 #define DEFAULT_MAX_IO_SIZE         INT_MAX     /* 2 GB - 1 block */
 
@@ -2381,6 +2380,10 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
     if (s->qdev.conf.discard_granularity == -1) {
         s->qdev.conf.discard_granularity =
             MAX(s->qdev.conf.logical_block_size, DEFAULT_DISCARD_GRANULARITY);
+    } else if (s->qdev.conf.discard_granularity == -2) {
+        s->qdev.conf.discard_granularity =
+            MAX(s->qdev.conf.logical_block_size,
+                blk_discard_granularity(s->qdev.conf.blk));
     }
 
     if (!s->version) {
diff --git a/include/hw/block/block.h b/include/hw/block/block.h
index d7246f3862..53d4a38044 100644
--- a/include/hw/block/block.h
+++ b/include/hw/block/block.h
@@ -54,7 +54,7 @@ static inline unsigned int get_physical_block_exp(BlockConf *conf)
     DEFINE_PROP_UINT16("min_io_size", _state, _conf.min_io_size, 0),    \
     DEFINE_PROP_UINT32("opt_io_size", _state, _conf.opt_io_size, 0),    \
     DEFINE_PROP_UINT32("discard_granularity", _state,                   \
-                       _conf.discard_granularity, -1),                  \
+                       _conf.discard_granularity, -2),                  \
     DEFINE_PROP_ON_OFF_AUTO("write-cache", _state, _conf.wce,           \
                             ON_OFF_AUTO_AUTO),                          \
     DEFINE_PROP_BOOL("share-rw", _state, _conf.share_rw, false)
diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index 8203d7f6f9..241a759432 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -13,6 +13,7 @@
 #ifndef BLOCK_BACKEND_H
 #define BLOCK_BACKEND_H
 
+#include "qemu/units.h"
 #include "qemu/iov.h"
 #include "block/throttle-groups.h"
 
@@ -25,6 +26,10 @@
  */
 #include "block/block.h"
 
+
+#define DEFAULT_DISCARD_GRANULARITY (4 * KiB)
+
+
 /* Callbacks for block device models */
 typedef struct BlockDevOps {
     /*
@@ -246,6 +251,7 @@ int blk_save_vmstate(BlockBackend *blk, const uint8_t *buf,
 int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size);
 int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz);
 int blk_probe_geometry(BlockBackend *blk, HDGeometry *geo);
+int blk_discard_granularity(BlockBackend *blk);
 BlockAIOCB *blk_abort_aio_request(BlockBackend *blk,
                                   BlockCompletionFunc *cb,
                                   void *opaque, int ret);
-- 
2.17.1




* [PATCH 2/2] iotests: fix 172 test
  2020-06-11 17:16 [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
  2020-06-11 17:16 ` [PATCH 1/2] " Denis V. Lunev
@ 2020-06-11 17:16 ` Denis V. Lunev
  2020-06-11 17:21 ` pls consider this is [v3] Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Denis V. Lunev @ 2020-06-11 17:16 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Fam Zheng, Eduardo Habkost, Max Reitz, Paolo Bonzini,
	Denis V. Lunev, John Snow

The default discard granularity is now -2, which shows up in the 172
test reference output as 4294967294 (0xfffffffe).

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Eduardo Habkost <ehabkost@redhat.com>
CC: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
CC: John Snow <jsnow@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Fam Zheng <fam@euphon.net>
---
 tests/qemu-iotests/172.out | 106 ++++++++++++++++++-------------------
 1 file changed, 53 insertions(+), 53 deletions(-)

diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
index 7abbe82427..fb6d3efe7b 100644
--- a/tests/qemu-iotests/172.out
+++ b/tests/qemu-iotests/172.out
@@ -28,7 +28,7 @@ Testing:
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -58,7 +58,7 @@ Testing: -fda TEST_DIR/t.qcow2
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -85,7 +85,7 @@ Testing: -fdb TEST_DIR/t.qcow2
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -96,7 +96,7 @@ Testing: -fdb TEST_DIR/t.qcow2
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -123,7 +123,7 @@ Testing: -fda TEST_DIR/t.qcow2 -fdb TEST_DIR/t.qcow2.2
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -134,7 +134,7 @@ Testing: -fda TEST_DIR/t.qcow2 -fdb TEST_DIR/t.qcow2.2
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -164,7 +164,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -191,7 +191,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2,index=1
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -202,7 +202,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2,index=1
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -229,7 +229,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=floppy,file=TEST_DIR/t
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -240,7 +240,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=floppy,file=TEST_DIR/t
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -270,7 +270,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global isa-fdc.driveA=none0
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -297,7 +297,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global isa-fdc.driveB=none0
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -324,7 +324,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -335,7 +335,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -365,7 +365,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -392,7 +392,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,unit=1
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -419,7 +419,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -430,7 +430,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -460,7 +460,7 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -471,7 +471,7 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -498,7 +498,7 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -509,7 +509,7 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -536,7 +536,7 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -563,7 +563,7 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -593,7 +593,7 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -604,7 +604,7 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -631,7 +631,7 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -642,7 +642,7 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -669,7 +669,7 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -680,7 +680,7 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -707,7 +707,7 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -718,7 +718,7 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -754,7 +754,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -765,7 +765,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -792,7 +792,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -803,7 +803,7 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -836,7 +836,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -847,7 +847,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -874,7 +874,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -885,7 +885,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -912,7 +912,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -923,7 +923,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -950,7 +950,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -961,7 +961,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1003,7 +1003,7 @@ Testing: -device floppy
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1030,7 +1030,7 @@ Testing: -device floppy,drive-type=120
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "120"
@@ -1057,7 +1057,7 @@ Testing: -device floppy,drive-type=144
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1084,7 +1084,7 @@ Testing: -device floppy,drive-type=288
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1114,7 +1114,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "120"
@@ -1141,7 +1141,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1171,7 +1171,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1198,7 +1198,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
                 physical_block_size = 512 (0x200)
                 min_io_size = 0 (0x0)
                 opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                discard_granularity = 4294967294 (0xfffffffe)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 9+ messages in thread

* pls consider this is [v3] Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest
  2020-06-11 17:16 [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
  2020-06-11 17:16 ` [PATCH 1/2] " Denis V. Lunev
  2020-06-11 17:16 ` [PATCH 2/2] iotests: fix 172 test Denis V. Lunev
@ 2020-06-11 17:21 ` Denis V. Lunev
  2020-06-19  8:38   ` Denis V. Lunev
  2020-06-19 16:20 ` Eduardo Habkost
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Denis V. Lunev @ 2020-06-11 17:21 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Fam Zheng, Eduardo Habkost, Max Reitz, Paolo Bonzini,
	John Snow

On 6/11/20 8:16 PM, Denis V. Lunev wrote:
> Nowadays SCSI drivers in guests are able to align UNMAP requests
> before sending them to the device. Right now QEMU provides the
> ability to set this via the "discard_granularity" property of the
> block device, which can be used by the management layer.
>
> However, from QEMU's point of view, there is pdiscard_granularity
> at the format driver level, e.g. in QCOW2 or iSCSI. It would be
> beneficial to pass this value as a default for this property.
>
> Technically this should reduce the number of useless UNMAP requests
> from the guest to the host. A basic test confirms this: a Fedora 31
> guest running 'fstrim /' on a 32 GB disk issued 401/415 requests
> with/without proper alignment to QEMU.
>
> Changes from v2:
> - 172 iotest fixed
>
> Changes from v1:
> - fixed typos in description
> - added machine type compatibility layer as suggested by Kevin
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Eduardo Habkost <ehabkost@redhat.com>
> CC: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
> CC: John Snow <jsnow@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Fam Zheng <fam@euphon.net>
>
>
Sorry for the missing v3 tag in the subject :(


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: pls consider this is [v3] Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest
  2020-06-11 17:21 ` pls consider this is [v3] Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
@ 2020-06-19  8:38   ` Denis V. Lunev
  0 siblings, 0 replies; 9+ messages in thread
From: Denis V. Lunev @ 2020-06-19  8:38 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Fam Zheng, Eduardo Habkost, Max Reitz, Paolo Bonzini,
	John Snow

On 6/11/20 8:21 PM, Denis V. Lunev wrote:
> On 6/11/20 8:16 PM, Denis V. Lunev wrote:
>> Nowadays SCSI drivers in guests are able to align UNMAP requests
>> before sending them to the device. Right now QEMU provides the
>> ability to set this via the "discard_granularity" property of the
>> block device, which can be used by the management layer.
>>
>> However, from QEMU's point of view, there is pdiscard_granularity
>> at the format driver level, e.g. in QCOW2 or iSCSI. It would be
>> beneficial to pass this value as a default for this property.
>>
>> Technically this should reduce the number of useless UNMAP requests
>> from the guest to the host. A basic test confirms this: a Fedora 31
>> guest running 'fstrim /' on a 32 GB disk issued 401/415 requests
>> with/without proper alignment to QEMU.
>>
>> Changes from v2:
>> - 172 iotest fixed
>>
>> Changes from v1:
>> - fixed typos in description
>> - added machine type compatibility layer as suggested by Kevin
>>
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> CC: Kevin Wolf <kwolf@redhat.com>
>> CC: Max Reitz <mreitz@redhat.com>
>> CC: Eduardo Habkost <ehabkost@redhat.com>
>> CC: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
>> CC: John Snow <jsnow@redhat.com>
>> CC: Paolo Bonzini <pbonzini@redhat.com>
>> CC: Fam Zheng <fam@euphon.net>
>>
>>
> Sorry for the missing v3 tag in the subject :(
ping


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest
  2020-06-11 17:16 [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
                   ` (2 preceding siblings ...)
  2020-06-11 17:21 ` pls consider this is [v3] Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
@ 2020-06-19 16:20 ` Eduardo Habkost
  2020-06-19 16:27   ` Denis V. Lunev
  2020-06-26  8:17 ` Denis V. Lunev
  2020-07-03 17:36 ` Denis V. Lunev
  5 siblings, 1 reply; 9+ messages in thread
From: Eduardo Habkost @ 2020-06-19 16:20 UTC (permalink / raw)
  To: Denis V. Lunev
  Cc: Kevin Wolf, Fam Zheng, qemu-block, qemu-devel, Max Reitz,
	Paolo Bonzini, John Snow

On Thu, Jun 11, 2020 at 08:16:06PM +0300, Denis V. Lunev wrote:
> Nowadays SCSI drivers in guests are able to align UNMAP requests
> before sending them to the device. Right now QEMU provides the
> ability to set this via the "discard_granularity" property of the
> block device, which can be used by the management layer.
> 
> However, from QEMU's point of view, there is pdiscard_granularity
> at the format driver level, e.g. in QCOW2 or iSCSI. It would be
> beneficial to pass this value as a default for this property.

I assume the value is visible to the guest.  What is supposed to
happen if live migrating and the block backend is a different one
on the destination?

Also, don't we have mechanisms to change the block backend
at run time?  What should happen in that case?

> 
> Technically this should reduce the number of useless UNMAP requests
> from the guest to the host. A basic test confirms this: a Fedora 31
> guest running 'fstrim /' on a 32 GB disk issued 401/415 requests
> with/without proper alignment to QEMU.
> 
> Changes from v2:
> - 172 iotest fixed
> 
> Changes from v1:
> - fixed typos in description
> - added machine type compatibility layer as suggested by Kevin
> 
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Eduardo Habkost <ehabkost@redhat.com>
> CC: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
> CC: John Snow <jsnow@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Fam Zheng <fam@euphon.net>
> 
> 

-- 
Eduardo



^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest
  2020-06-19 16:20 ` Eduardo Habkost
@ 2020-06-19 16:27   ` Denis V. Lunev
  0 siblings, 0 replies; 9+ messages in thread
From: Denis V. Lunev @ 2020-06-19 16:27 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Kevin Wolf, Fam Zheng, qemu-block, qemu-devel, Max Reitz,
	Paolo Bonzini, John Snow

On 6/19/20 7:20 PM, Eduardo Habkost wrote:
> On Thu, Jun 11, 2020 at 08:16:06PM +0300, Denis V. Lunev wrote:
>> Nowadays SCSI drivers in guests are able to align UNMAP requests
>> before sending them to the device. Right now QEMU provides the
>> ability to set this via the "discard_granularity" property of the
>> block device, which can be used by the management layer.
>>
>> However, from QEMU's point of view, there is pdiscard_granularity
>> at the format driver level, e.g. in QCOW2 or iSCSI. It would be
>> beneficial to pass this value as a default for this property.
> I assume the value is visible to the guest.  What is supposed to
> happen if live migrating and the block backend is a different one
> on the destination?
>
> Also, don't we have mechanisms to change the block backend
> at run time?  What should happen in that case?
First of all, I think this should be a very rare case. Even so,
nothing bad is expected: the guest will see the old value, i.e.
the one negotiated at guest startup.

Let us assume that the block backend has been changed and the new
discard alignment is
- less than the set value. In this case the guest will keep
  aligning its requests more coarsely than necessary, i.e. some
  blocks that could be discarded will not be. This lasts until
  the guest restarts and sees the smaller alignment; the first
  re-trim will then discard all blocks that were skipped so far.
- greater than the set value. The code will work as it does now,
  i.e. some extra requests will be sent.
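The "less than the set value" case above is just boundary arithmetic:
a guest rounding to a coarser granularity than the backend needs will
shrink or drop the unaligned head and tail of each request. A minimal
sketch (hypothetical illustration, not QEMU code; function name and
values are made up):

```python
def align_discard(offset, length, granularity):
    """Shrink a discard request to granularity-aligned boundaries.

    Head/tail bytes outside the aligned region are simply not
    discarded, mirroring the coarser-than-needed alignment case.
    """
    start = -(-offset // granularity) * granularity       # round start up
    end = (offset + length) // granularity * granularity  # round end down
    return (start, end - start) if end > start else None

# Guest negotiated 64 KiB granularity, but the new backend would
# accept 4 KiB: requests stay over-aligned, so some blocks remain
# un-discarded until the guest re-trims with the smaller value.
print(align_discard(4096, 126976, 64 * 1024))  # (65536, 65536)
print(align_discard(0, 4096, 64 * 1024))       # None: too small to align
```

After a restart the guest would align with the smaller granularity,
and the first full re-trim covers the previously skipped blocks.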

Den


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest
  2020-06-11 17:16 [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
                   ` (3 preceding siblings ...)
  2020-06-19 16:20 ` Eduardo Habkost
@ 2020-06-26  8:17 ` Denis V. Lunev
  2020-07-03 17:36 ` Denis V. Lunev
  5 siblings, 0 replies; 9+ messages in thread
From: Denis V. Lunev @ 2020-06-26  8:17 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Fam Zheng, Eduardo Habkost, Max Reitz, Paolo Bonzini,
	John Snow

On 6/11/20 8:16 PM, Denis V. Lunev wrote:
> Nowadays SCSI drivers in guests are able to align UNMAP requests
> before sending them to the device. Right now QEMU provides the
> ability to set this via the "discard_granularity" property of the
> block device, which can be used by the management layer.
>
> However, from QEMU's point of view, there is pdiscard_granularity
> at the format driver level, e.g. in QCOW2 or iSCSI. It would be
> beneficial to pass this value as a default for this property.
>
> Technically this should reduce the number of useless UNMAP requests
> from the guest to the host. A basic test confirms this: a Fedora 31
> guest running 'fstrim /' on a 32 GB disk issued 401/415 requests
> with/without proper alignment to QEMU.
>
> Changes from v2:
> - 172 iotest fixed
>
> Changes from v1:
> - fixed typos in description
> - added machine type compatibility layer as suggested by Kevin
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Eduardo Habkost <ehabkost@redhat.com>
> CC: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
> CC: John Snow <jsnow@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Fam Zheng <fam@euphon.net>
>
>
ping v2


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest
  2020-06-11 17:16 [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
                   ` (4 preceding siblings ...)
  2020-06-26  8:17 ` Denis V. Lunev
@ 2020-07-03 17:36 ` Denis V. Lunev
  5 siblings, 0 replies; 9+ messages in thread
From: Denis V. Lunev @ 2020-07-03 17:36 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: Kevin Wolf, Fam Zheng, Eduardo Habkost, Max Reitz, Paolo Bonzini,
	John Snow

On 6/11/20 8:16 PM, Denis V. Lunev wrote:
> Nowadays SCSI drivers in guests are able to align UNMAP requests
> before sending them to the device. Right now QEMU provides the
> ability to set this via the "discard_granularity" property of the
> block device, which can be used by the management layer.
>
> However, from QEMU's point of view, there is pdiscard_granularity
> at the format driver level, e.g. in QCOW2 or iSCSI. It would be
> beneficial to pass this value as a default for this property.
>
> Technically this should reduce the number of useless UNMAP requests
> from the guest to the host. A basic test confirms this: a Fedora 31
> guest running 'fstrim /' on a 32 GB disk issued 401/415 requests
> with/without proper alignment to QEMU.
>
> Changes from v2:
> - 172 iotest fixed
>
> Changes from v1:
> - fixed typos in description
> - added machine type compatibility layer as suggested by Kevin
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Eduardo Habkost <ehabkost@redhat.com>
> CC: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
> CC: John Snow <jsnow@redhat.com>
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Fam Zheng <fam@euphon.net>
>
>
ping v3


^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2020-07-03 17:37 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-11 17:16 [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
2020-06-11 17:16 ` [PATCH 1/2] " Denis V. Lunev
2020-06-11 17:16 ` [PATCH 2/2] iotests: fix 172 test Denis V. Lunev
2020-06-11 17:21 ` pls consider this is [v3] Re: [PATCH 0/2] block: propagate discard alignment from format drivers to the guest Denis V. Lunev
2020-06-19  8:38   ` Denis V. Lunev
2020-06-19 16:20 ` Eduardo Habkost
2020-06-19 16:27   ` Denis V. Lunev
2020-06-26  8:17 ` Denis V. Lunev
2020-07-03 17:36 ` Denis V. Lunev
