From: Pekka Enberg <penberg@cs.helsinki.fi>
To: Asias He <asias.hejun@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>, Ingo Molnar <mingo@elte.hu>,
Sasha Levin <levinsasha928@gmail.com>,
Prasad Joshi <prasadjoshi124@gmail.com>,
kvm@vger.kernel.org
Subject: Re: [PATCH 1/2] kvm tools: Bring VIRTIO_BLK_F_SEG_MAX feature back to virtio blk
Date: Fri, 13 May 2011 15:46:43 +0300 [thread overview]
Message-ID: <4DCD2833.7080803@cs.helsinki.fi> (raw)
In-Reply-To: <1305254409-9079-1-git-send-email-asias.hejun@gmail.com>
On 5/13/11 5:40 AM, Asias He wrote:
> commit b764422bb0b46b00b896f6d4538ac3d3dde9e56b
> (kvm tools: Add support for multiple virtio-blk)
> removed the VIRTIO_BLK_F_SEG_MAX publishment to guest.
>
> There is no reason we should not support it. Just bring it back.
>
> Signed-off-by: Asias He <asias.hejun@gmail.com>
Sasha?
> ---
> tools/kvm/virtio/blk.c | 19 +++++++++++++++----
> 1 files changed, 15 insertions(+), 4 deletions(-)
>
> diff --git a/tools/kvm/virtio/blk.c b/tools/kvm/virtio/blk.c
> index 5085f1b..8740bc4 100644
> --- a/tools/kvm/virtio/blk.c
> +++ b/tools/kvm/virtio/blk.c
> @@ -21,6 +21,10 @@
> #define NUM_VIRT_QUEUES 1
>
> #define VIRTIO_BLK_QUEUE_SIZE 128
> +/*
> + * The header and status consume two entries.
> + */
> +#define DISK_SEG_MAX (VIRTIO_BLK_QUEUE_SIZE - 2)
>
> struct blk_dev_job {
> struct virt_queue *vq;
> @@ -278,11 +282,12 @@ void virtio_blk__init(struct kvm *kvm, struct disk_image *disk)
> blk_dev_base_addr = IOPORT_VIRTIO_BLK + new_dev_idx * IOPORT_VIRTIO_BLK_SIZE;
>
> *bdev = (struct blk_dev) {
> - .mutex = PTHREAD_MUTEX_INITIALIZER,
> - .disk = disk,
> - .idx = new_dev_idx,
> - .blk_config = (struct virtio_blk_config) {
> + .mutex = PTHREAD_MUTEX_INITIALIZER,
> + .disk = disk,
> + .idx = new_dev_idx,
> + .blk_config = (struct virtio_blk_config) {
> .capacity = disk->size / SECTOR_SIZE,
> + .seg_max = DISK_SEG_MAX,
> },
> .pci_hdr = (struct pci_device_header) {
> .vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET,
> @@ -294,6 +299,12 @@ void virtio_blk__init(struct kvm *kvm, struct disk_image *disk)
> .subsys_id = PCI_SUBSYSTEM_ID_VIRTIO_BLK,
> .bar[0] = blk_dev_base_addr | PCI_BASE_ADDRESS_SPACE_IO,
> },
> +	/*
> +	 * Note we don't set VIRTIO_BLK_F_GEOMETRY here, so the
> +	 * guest kernel will compute disk geometry on its own;
> +	 * the same applies to VIRTIO_BLK_F_BLK_SIZE.
> +	 */
> +	.host_features	= (1UL << VIRTIO_BLK_F_SEG_MAX),
> };
>
> if (irq__register_device(PCI_DEVICE_ID_VIRTIO_BLK, &dev, &pin, &line) < 0)
Thread overview: 7+ messages
2011-05-13 2:40 [PATCH 1/2] kvm tools: Bring VIRTIO_BLK_F_SEG_MAX feature back to virtio blk Asias He
2011-05-13 2:40 ` [PATCH 2/2] kvm tools: Tune the command-line option Asias He
2011-05-13 12:46 ` Pekka Enberg [this message]
2011-05-13 14:07 ` [PATCH 1/2] kvm tools: Bring VIRTIO_BLK_F_SEG_MAX feature back to virtio blk Sasha Levin
2011-05-13 14:34 ` Asias He
2011-05-13 15:26 ` Sasha Levin
2011-05-13 16:16 ` Asias He