* IO speed limited by size of IO request (for RBD driver)
From: Sylvain Munaut @ 2013-04-23 13:33 UTC
To: xen-devel
Hi,

I was observing a pretty severe performance impact when using a Xen VM
with RBD (Ceph) backed storage, especially when doing large sequential
access.

And I think I finally found a major cause for it: even large user
space requests seem to be split into small requests of 11 * 4096
bytes (44k).

This is caused by:

blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
blk_queue_max_segment_size(rq, PAGE_SIZE);

What's the impact of modifying those?

I've seen some justification for the BLKIF_MAX_SEGMENTS_PER_REQUEST
limit, but why limit the segment size to PAGE_SIZE?
Cheers,
Sylvain
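
For reference, the arithmetic behind the 44k figure above: blkfront caps each
request at BLKIF_MAX_SEGMENTS_PER_REQUEST segments of at most one page each,
so no single ring request can exceed 11 * 4096 bytes. A minimal sketch of
that calculation (the constants are the values quoted in this thread, not
taken from any particular kernel tree):

#include <stdio.h>

/* Values as quoted in this thread: 11 segments per blkif ring request,
 * each segment capped at one 4 KiB page by blk_queue_max_segment_size(). */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11
#define XEN_PAGE_SIZE                  4096

int main(void)
{
    unsigned int max_request = BLKIF_MAX_SEGMENTS_PER_REQUEST * XEN_PAGE_SIZE;

    /* 11 * 4096 = 45056 bytes = 44 KiB: the largest I/O the frontend will
     * submit in one ring request, no matter how big the original user-space
     * request was before the block layer split it. */
    printf("max blkif request: %u bytes (%u KiB)\n",
           max_request, max_request / 1024);
    return 0;
}

Raising either limit only helps if the backend can actually accept the larger
requests, which is what the rest of this thread is about.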
* Re: IO speed limited by size of IO request (for RBD driver)
From: Steven Haigh @ 2013-04-23 13:41 UTC
To: xen-devel
On 23/04/2013 11:33 PM, Sylvain Munaut wrote:
> Hi,
>
>
> I was observing a pretty severe performance impact when using Xen VM
> with RBD (Ceph) backed storage, especially when doing large sequential
> access.
>
> And I think I finally found a major cause for it: even large user
> space requests seem to be split into small requests of 11 * 4096
> bytes. ( 44k )
>
> This is caused by :
>
> blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
> blk_queue_max_segment_size(rq, PAGE_SIZE);
>
> What's the impact of modifying those ?
> I've seen some justification for the BLKIF_MAX_SEGMENTS_PER_REQUEST
> limit, but why limit segment size to page_size ?
I'm seeing the same as you - see the thread in the archives over the
last few weeks - subject "RE: Xen disk write slowness in kernel 3.8.x"
I get ~50MB/sec max write speeds, probably due to the same problem.
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
* Re: IO speed limited by size of IO request (for RBD driver)
From: Roger Pau Monné @ 2013-04-23 14:06 UTC
To: Steven Haigh; +Cc: xen-devel@lists.xen.org
On 23/04/13 15:41, Steven Haigh wrote:
> On 23/04/2013 11:33 PM, Sylvain Munaut wrote:
>> Hi,
>>
>>
>> I was observing a pretty severe performance impact when using Xen VM
>> with RBD (Ceph) backed storage, especially when doing large sequential
>> access.
>>
>> And I think I finally found a major cause for it: even large user
>> space requests seem to be split into small requests of 11 * 4096
>> bytes. ( 44k )
>>
>> This is caused by :
>>
>> blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
>> blk_queue_max_segment_size(rq, PAGE_SIZE);
>>
>> What's the impact of modifying those ?
>> I've seen some justification for the BLKIF_MAX_SEGMENTS_PER_REQUEST
>> limit, but why limit segment size to page_size ?
>
> I'm seeing the same as you - see the thread in the archives over the
> last few weeks - subject "RE: Xen disk write slowness in kernel 3.8.x"
>
> I get ~50MB/sec max write speeds due to probably the same problem.
When using Ceph, are you using the Linux kernel backend (blkback), Qemu
or blktap?
I've been working on expanding the number of segments that a request
can hold, and the patches just went upstream for the next Linux kernel
(3.10). You might want to test them; they can be found in the following
git repo:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
branch for-jens-3.10
You will need to use them in both DomU and Dom0 in order to get more
segments in a request.
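
For readers who have not followed the indirect-descriptor work Roger
mentions: the idea is to move the per-segment descriptors out of the
fixed-size ring slot into separately granted pages, so one request can carry
far more than the 11 inline segments of the classic protocol. The sketch
below is purely conceptual (struct layout, field names and the page count are
illustrative, not the blkif wire format from the for-jens-3.10 branch):

#include <stdint.h>
#include <stdio.h>

#define XEN_PAGE_SIZE   4096
#define INLINE_SEGMENTS 11   /* classic BLKIF_MAX_SEGMENTS_PER_REQUEST */
#define INDIRECT_PAGES  8    /* illustrative number of indirect pages  */

/* One data segment: a grant reference plus the sector range inside it. */
struct seg_desc {
    uint32_t gref;
    uint8_t  first_sect;
    uint8_t  last_sect;
};

int main(void)
{
    unsigned int per_page = XEN_PAGE_SIZE / sizeof(struct seg_desc);
    unsigned int indirect = INDIRECT_PAGES * per_page;

    /* A classic request stores its 11 segments inline in the ring slot; an
     * indirect request stores only grant references to pages that are
     * themselves full of seg_desc entries, so the ring slot stays small
     * while the segment count grows (the real implementation caps it, e.g.
     * the 256 segments Roger mentions for the backend). */
    printf("inline segments: %u, indirect capacity with %u pages: %u\n",
           INLINE_SEGMENTS, INDIRECT_PAGES, indirect);
    return 0;
}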
* Re: IO speed limited by size of IO request (for RBD driver)
From: Sylvain Munaut @ 2013-04-23 14:15 UTC
To: Roger Pau Monné; +Cc: Steven Haigh, xen-devel@lists.xen.org
Hi,
> When using Ceph, are you using the Linux kernel backend (blkback), Qemu
> or blktap?
I've tried both:

- Using the RBD kernel driver in dom0 and using phy:/dev/rbd/xxx/xxx
in the VM config
- Using a custom blktap driver

and I'm observing the same level of performance with both. It was while
debugging that custom blktap driver, trying to find the bottleneck, that
I stumbled on the 44k limit.
> I've been working on expanding the number of segments that a request can
> hold, and the patches just went upstream for the next Linux kernel
> (3.10), you might want to test them, they can be found in the following
> git repo:
Oh, indeed, that's really interesting, I'll give it a shot. I'm not
sure how realistic it'll be for me to update all the VM kernels to
something that recent, but at least I'll be able to confirm the problem.
Cheers,
Sylvain
* Re: IO speed limited by size of IO request (for RBD driver)
From: Sylvain Munaut @ 2013-04-25 13:00 UTC
To: Roger Pau Monné; +Cc: Steven Haigh, xen-devel@lists.xen.org
Hi,
> I've been working on expanding the number of segments that a request can
> hold, and the patches just went upstream for the next Linux kernel
> (3.10), you might want to test them, they can be found in the following
> git repo:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
>
> branch for-jens-3.10
>
> You will need to use them in both DomU and Dom0 in order to get more
> segments in a request.
I read about this change on the ML and in the code - won't that also
require changes to the blktap kernel driver and to userspace as well in
order to use the improvements?
Cheers,
Sylvain
* Re: IO speed limited by size of IO request (for RBD driver)
From: Steven Haigh @ 2013-04-26 14:16 UTC
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel
On 27/04/2013 12:06 AM, Roger Pau Monné wrote:
> On 23/04/13 21:05, Steven Haigh wrote:
>> Sorry - resending this to Felipe as well - as I started talking to him
>> directly previously.
>>
>> Felipe, to bring you up to date, I've copied over the blkback files from
>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and
>> tested. Results below:
>>
Bringing this into context in a nutshell - results showed about 5MB/sec
improvement when using buffered disk access - totalling ~57MB/sec write
speed vs ~98MB/sec when using the oflag=direct flag to dd.

When talking about backporting a few indirect patches to mainline
blkback (3.8.8 atm):
>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote:
>>> I think it requires a non-trivial amount of work, what you could do as a
>>> test is directly replace the affected files with the ones in my tree, it
>>> is not optimal, but I don't think it's going to cause problems, and you
>>> could at least see if indirect descriptors solve your problem.
>>
>> Ok, I copied across those files, built, packaged and installed them on
>> my Dom0. Good news is that its a little quicker, bad news is not by much.
>
> Could you try increasing xen_blkif_max_segments variable in
> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only
> need to recompile the DomU kernel after this, the Dom0 is able to
> support up to 256 indirect segments.
I'll have to look at this. All DomUs are Scientific Linux 6.4 systems -
so essentially RHEL 6.4 and so on. I haven't built a RH kernel as yet -
so I'll have to look at what is involved. It might be as simple as
rebuilding a normal SRPM.
> Also, I think we should bring this conversation back to xen-devel.
Agreed.
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
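
The tweak Roger suggests above amounts to raising the frontend's default
indirect-segment count and rebuilding only the DomU kernel. A hedged sketch
of what such a knob typically looks like; the variable name
xen_blkif_max_segments, its default of 32 and the backend's limit of 256 come
from the thread, while the module-parameter wrapper is illustrative and not
copied from Roger's tree:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Hedged sketch, not taken from the for-jens-3.10 branch.  Bumping the
 * default from 32 to 64 or 128 (or exposing it as a module parameter, as
 * below) only changes what the frontend asks for; the backend in Dom0
 * reportedly accepts up to 256 indirect segments. */
static unsigned int xen_blkif_max_segments = 32;
module_param_named(max_segments, xen_blkif_max_segments, uint, 0444);
MODULE_PARM_DESC(max_segments,
                 "Maximum number of indirect segments requested by blkfront");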
* Re: IO speed limited by size of IO request (for RBD driver)
From: Steven Haigh @ 2013-04-27 1:57 UTC
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel
On 27/04/2013 12:16 AM, Steven Haigh wrote:
> On 27/04/2013 12:06 AM, Roger Pau Monné wrote:
>> On 23/04/13 21:05, Steven Haigh wrote:
>>> Sorry - resending this to Felipe as well - as I started talking to him
>>> directly previously.
>>>
>>> Felipe, to bring you up to date, I've copied over the blkback files from
>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and
>>> tested. Results below:
>>>
>
> Bringing this into context in a nutshell - results showed about 5MB/sec
> improvement when using buffered disk access - totalling ~57MB/sec write
> speed vs ~98MB/sec when using the oflag=direct flag to dd.
>
> When talking about back porting a few indirect patches to mainline
> blkback (3.8.8 atm):
>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote:
>>>> I think it requires a non-trivial amount of work, what you could do
>>>> as a
>>>> test is directly replace the affected files with the ones in my
>>>> tree, it
>>>> is not optimal, but I don't think it's going to cause problems, and you
>>>> could at least see if indirect descriptors solve your problem.
>>>
>>> Ok, I copied across those files, built, packaged and installed them on
>>> my Dom0. Good news is that its a little quicker, bad news is not by
>>> much.
>>
>> Could you try increasing xen_blkif_max_segments variable in
>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only
>> need to recompile the DomU kernel after this, the Dom0 is able to
>> support up to 256 indirect segments.
>
> I'll have to look at this. All DomU's are Scientific Linux 6.4 systems -
> so essentially RHEL6.4 and so on. I haven't built a RH kernel as yet -
> so I'll have to look at what is involved. It might be as simple as
> rebuilding a normal SRPM.
Ok, I've had a look at the RH xen-blkfront.c - and I can't see any
definition of xen_blkif_max_segments - or anything close. I've attached
the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6 SRPM.
Any ideas on where to go from here?
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
[-- Attachment #2: xen-blkfront.c --]
[-- Type: text/plain, Size: 35081 bytes --]
/*
* blkfront.c
*
* XenLinux virtual block device driver.
*
* Copyright (c) 2003-2004, Keir Fraser & Steve Hand
* Modifications by Mark A. Williamson are (c) Intel Research Cambridge
* Copyright (c) 2004, Christian Limpach
* Copyright (c) 2004, Andrew Warfield
* Copyright (c) 2005, Christopher Clark
* Copyright (c) 2005, XenSource Ltd
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version 2
* as published by the Free Software Foundation; or, when distributed
* separately from the Linux kernel or incorporated into other
* software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include <linux/interrupt.h>
#include <linux/blkdev.h>
#include <linux/hdreg.h>
#include <linux/cdrom.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <xen/xenbus.h>
#include <xen/grant_table.h>
#include <xen/events.h>
#include <xen/page.h>
#include <xen/platform_pci.h>
#include <xen/interface/grant_table.h>
#include <xen/interface/io/blkif.h>
#include <xen/interface/io/protocols.h>
#include <asm/xen/hypervisor.h>
static int sda_is_xvda;
module_param(sda_is_xvda, bool, 0);
MODULE_PARM_DESC(sda_is_xvda,
"sdX in guest config translates to xvdX, not xvd(X+4)");
enum blkif_state {
BLKIF_STATE_DISCONNECTED,
BLKIF_STATE_CONNECTED,
BLKIF_STATE_SUSPENDED,
};
struct blk_shadow {
struct blkif_request req;
unsigned long request;
unsigned long frame[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
static const struct block_device_operations xlvbd_block_fops;
#define BLK_RING_SIZE __RING_SIZE((struct blkif_sring *)0, PAGE_SIZE)
/*
* We have one of these per vbd, whether ide, scsi or 'other'. They
* hang in private_data off the gendisk structure. We may end up
* putting all kinds of interesting stuff here :-)
*/
struct blkfront_info
{
struct mutex mutex;
struct xenbus_device *xbdev;
struct gendisk *gd;
int vdevice;
blkif_vdev_t handle;
enum blkif_state connected;
int ring_ref;
struct blkif_front_ring ring;
struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
unsigned int evtchn, irq;
struct request_queue *rq;
struct work_struct work;
struct gnttab_free_callback callback;
struct blk_shadow shadow[BLK_RING_SIZE];
unsigned long shadow_free;
unsigned int feature_flush;
int is_ready;
};
static DEFINE_SPINLOCK(blkif_io_lock);
static unsigned int nr_minors;
static unsigned long *minors;
static DEFINE_SPINLOCK(minor_lock);
#define MAXIMUM_OUTSTANDING_BLOCK_REQS \
(BLKIF_MAX_SEGMENTS_PER_REQUEST * BLK_RING_SIZE)
#define GRANT_INVALID_REF 0
#define PARTS_PER_DISK 16
#define PARTS_PER_EXT_DISK 256
#define BLKIF_MAJOR(dev) ((dev)>>8)
#define BLKIF_MINOR(dev) ((dev) & 0xff)
#define EXT_SHIFT 28
#define EXTENDED (1<<EXT_SHIFT)
#define VDEV_IS_EXTENDED(dev) ((dev)&(EXTENDED))
#define BLKIF_MINOR_EXT(dev) ((dev)&(~EXTENDED))
#define EMULATED_HD_DISK_MINOR_OFFSET (0)
#define EMULATED_HD_DISK_NAME_OFFSET (EMULATED_HD_DISK_MINOR_OFFSET / 256)
#define DEV_NAME "xvd" /* name in /dev */
/* module settings dependent on the "sda_is_xvda" module parameter */
static int emulated_sd_disk_minor_offset = EMULATED_HD_DISK_MINOR_OFFSET + (4 * 16);
static int emulated_sd_disk_name_offset = EMULATED_HD_DISK_NAME_OFFSET + 4;
static int get_id_from_freelist(struct blkfront_info *info)
{
unsigned long free = info->shadow_free;
BUG_ON(free >= BLK_RING_SIZE);
info->shadow_free = info->shadow[free].req.id;
info->shadow[free].req.id = 0x0fffffee; /* debug */
return free;
}
static void add_id_to_freelist(struct blkfront_info *info,
unsigned long id)
{
info->shadow[id].req.id = info->shadow_free;
info->shadow[id].request = 0;
info->shadow_free = id;
}
static int xlbd_reserve_minors(unsigned int minor, unsigned int nr)
{
unsigned int end = minor + nr;
int rc;
if (end > nr_minors) {
unsigned long *bitmap, *old;
bitmap = kzalloc(BITS_TO_LONGS(end) * sizeof(*bitmap),
GFP_KERNEL);
if (bitmap == NULL)
return -ENOMEM;
spin_lock(&minor_lock);
if (end > nr_minors) {
old = minors;
memcpy(bitmap, minors,
BITS_TO_LONGS(nr_minors) * sizeof(*bitmap));
minors = bitmap;
nr_minors = BITS_TO_LONGS(end) * BITS_PER_LONG;
} else
old = bitmap;
spin_unlock(&minor_lock);
kfree(old);
}
spin_lock(&minor_lock);
if (find_next_bit(minors, end, minor) >= end) {
for (; minor < end; ++minor)
__set_bit(minor, minors);
rc = 0;
} else
rc = -EBUSY;
spin_unlock(&minor_lock);
return rc;
}
static void xlbd_release_minors(unsigned int minor, unsigned int nr)
{
unsigned int end = minor + nr;
BUG_ON(end > nr_minors);
spin_lock(&minor_lock);
for (; minor < end; ++minor)
__clear_bit(minor, minors);
spin_unlock(&minor_lock);
}
static void blkif_restart_queue_callback(void *arg)
{
struct blkfront_info *info = (struct blkfront_info *)arg;
schedule_work(&info->work);
}
static int blkif_getgeo(struct block_device *bd, struct hd_geometry *hg)
{
/* We don't have real geometry info, but let's at least return
values consistent with the size of the device */
sector_t nsect = get_capacity(bd->bd_disk);
sector_t cylinders = nsect;
hg->heads = 0xff;
hg->sectors = 0x3f;
sector_div(cylinders, hg->heads * hg->sectors);
hg->cylinders = cylinders;
if ((sector_t)(hg->cylinders + 1) * hg->heads * hg->sectors < nsect)
hg->cylinders = 0xffff;
return 0;
}
static int blkif_ioctl(struct block_device *bdev, fmode_t mode,
unsigned command, unsigned long argument)
{
struct blkfront_info *info = bdev->bd_disk->private_data;
int i;
dev_dbg(&info->xbdev->dev, "command: 0x%x, argument: 0x%lx\n",
command, (long)argument);
switch (command) {
case CDROMMULTISESSION:
dev_dbg(&info->xbdev->dev, "FIXME: support multisession CDs later\n");
for (i = 0; i < sizeof(struct cdrom_multisession); i++)
if (put_user(0, (char __user *)(argument + i)))
return -EFAULT;
return 0;
case CDROM_GET_CAPABILITY: {
struct gendisk *gd = info->gd;
if (gd->flags & GENHD_FL_CD)
return 0;
return -EINVAL;
}
default:
/*printk(KERN_ALERT "ioctl %08x not supported by Xen blkdev\n",
command);*/
return -EINVAL; /* same return as native Linux */
}
return 0;
}
/*
* blkif_queue_request
*
* request block io
*
* id: for guest use only.
* operation: BLKIF_OP_{READ,WRITE,PROBE}
* buffer: buffer to read/write into. this should be a
* virtual address in the guest os.
*/
static int blkif_queue_request(struct request *req)
{
struct blkfront_info *info = req->rq_disk->private_data;
unsigned long buffer_mfn;
struct blkif_request *ring_req;
unsigned long id;
unsigned int fsect, lsect;
int i, ref;
grant_ref_t gref_head;
struct scatterlist *sg;
if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
return 1;
if (gnttab_alloc_grant_references(
BLKIF_MAX_SEGMENTS_PER_REQUEST, &gref_head) < 0) {
gnttab_request_free_callback(
&info->callback,
blkif_restart_queue_callback,
info,
BLKIF_MAX_SEGMENTS_PER_REQUEST);
return 1;
}
/* Fill out a communications ring structure. */
ring_req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
id = get_id_from_freelist(info);
info->shadow[id].request = (unsigned long)req;
ring_req->id = id;
ring_req->sector_number = (blkif_sector_t)blk_rq_pos(req);
ring_req->handle = info->handle;
ring_req->operation = rq_data_dir(req) ?
BLKIF_OP_WRITE : BLKIF_OP_READ;
if (req->cmd_flags & REQ_HARDBARRIER)
ring_req->operation = BLKIF_OP_WRITE_BARRIER;
ring_req->nr_segments = blk_rq_map_sg(req->q, req, info->sg);
BUG_ON(ring_req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);
for_each_sg(info->sg, sg, ring_req->nr_segments, i) {
buffer_mfn = pfn_to_mfn(page_to_pfn(sg_page(sg)));
fsect = sg->offset >> 9;
lsect = fsect + (sg->length >> 9) - 1;
/* install a grant reference. */
ref = gnttab_claim_grant_reference(&gref_head);
BUG_ON(ref == -ENOSPC);
gnttab_grant_foreign_access_ref(
ref,
info->xbdev->otherend_id,
buffer_mfn,
rq_data_dir(req) );
info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn);
ring_req->seg[i] =
(struct blkif_request_segment) {
.gref = ref,
.first_sect = fsect,
.last_sect = lsect };
}
info->ring.req_prod_pvt++;
/* Keep a private copy so we can reissue requests when recovering. */
info->shadow[id].req = *ring_req;
gnttab_free_grant_references(gref_head);
return 0;
}
static inline void flush_requests(struct blkfront_info *info)
{
int notify;
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, notify);
if (notify)
notify_remote_via_irq(info->irq);
}
/*
* do_blkif_request
* read a block; request is in a request queue
*/
static void do_blkif_request(struct request_queue *rq)
{
struct blkfront_info *info = NULL;
struct request *req;
int queued;
pr_debug("Entered do_blkif_request\n");
queued = 0;
while ((req = blk_peek_request(rq)) != NULL) {
info = req->rq_disk->private_data;
if (RING_FULL(&info->ring))
goto wait;
blk_start_request(req);
if (req->cmd_type != REQ_TYPE_FS) {
__blk_end_request_all(req, -EIO);
continue;
}
pr_debug("do_blk_req %p: cmd %p, sec %lx, "
"(%u/%u) buffer:%p [%s]\n",
req, req->cmd, (unsigned long)blk_rq_pos(req),
blk_rq_cur_sectors(req), blk_rq_sectors(req),
req->buffer, rq_data_dir(req) ? "write" : "read");
if (blkif_queue_request(req)) {
blk_requeue_request(rq, req);
wait:
/* Avoid pointless unplugs. */
blk_stop_queue(rq);
break;
}
queued++;
}
if (queued != 0)
flush_requests(info);
}
static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)
{
struct request_queue *rq;
rq = blk_init_queue(do_blkif_request, &blkif_io_lock);
if (rq == NULL)
return -1;
queue_flag_set_unlocked(QUEUE_FLAG_VIRT, rq);
/* Hard sector size and max sectors impersonate the equiv. hardware. */
blk_queue_logical_block_size(rq, sector_size);
blk_queue_max_hw_sectors(rq, 512);
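/*
 * Note for readers of the thread above: the blk_queue_max_segment_size()
 * and blk_queue_max_segments() calls below are the two limits Sylvain
 * quotes at the start of the discussion.  With
 * BLKIF_MAX_SEGMENTS_PER_REQUEST == 11 and PAGE_SIZE == 4096 they cap a
 * merged request at 11 * 4 KiB = 44 KiB, and this EL6 frontend has no
 * xen_blkif_max_segments tunable to raise that.
 */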
/* Each segment in a request is up to an aligned page in size. */
blk_queue_segment_boundary(rq, PAGE_SIZE - 1);
blk_queue_max_segment_size(rq, PAGE_SIZE);
/* Ensure a merged request will fit in a single I/O ring slot. */
blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
/* Make sure buffer addresses are sector-aligned. */
blk_queue_dma_alignment(rq, 511);
/* Make sure we don't use bounce buffers. */
blk_queue_bounce_limit(rq, BLK_BOUNCE_ANY);
gd->queue = rq;
return 0;
}
static void xlvbd_flush(struct blkfront_info *info)
{
blk_queue_flush(info->rq, info->feature_flush);
printk(KERN_INFO "blkfront: %s: barriers %s\n",
info->gd->disk_name,
info->feature_flush ? "enabled" : "disabled");
}
static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
{
int major;
major = BLKIF_MAJOR(vdevice);
*minor = BLKIF_MINOR(vdevice);
switch (major) {
case XEN_IDE0_MAJOR:
*offset = (*minor / 64) + EMULATED_HD_DISK_NAME_OFFSET;
*minor = ((*minor / 64) * PARTS_PER_DISK) +
EMULATED_HD_DISK_MINOR_OFFSET;
break;
case XEN_IDE1_MAJOR:
*offset = (*minor / 64) + 2 + EMULATED_HD_DISK_NAME_OFFSET;
*minor = (((*minor / 64) + 2) * PARTS_PER_DISK) +
EMULATED_HD_DISK_MINOR_OFFSET;
break;
case XEN_SCSI_DISK0_MAJOR:
*offset = (*minor / PARTS_PER_DISK) + emulated_sd_disk_name_offset;
*minor = *minor + emulated_sd_disk_minor_offset;
break;
case XEN_SCSI_DISK1_MAJOR:
case XEN_SCSI_DISK2_MAJOR:
case XEN_SCSI_DISK3_MAJOR:
case XEN_SCSI_DISK4_MAJOR:
case XEN_SCSI_DISK5_MAJOR:
case XEN_SCSI_DISK6_MAJOR:
case XEN_SCSI_DISK7_MAJOR:
*offset = (*minor / PARTS_PER_DISK) +
((major - XEN_SCSI_DISK1_MAJOR + 1) * 16) +
emulated_sd_disk_name_offset;
*minor = *minor +
((major - XEN_SCSI_DISK1_MAJOR + 1) * 16 * PARTS_PER_DISK) +
emulated_sd_disk_minor_offset;
break;
case XEN_SCSI_DISK8_MAJOR:
case XEN_SCSI_DISK9_MAJOR:
case XEN_SCSI_DISK10_MAJOR:
case XEN_SCSI_DISK11_MAJOR:
case XEN_SCSI_DISK12_MAJOR:
case XEN_SCSI_DISK13_MAJOR:
case XEN_SCSI_DISK14_MAJOR:
case XEN_SCSI_DISK15_MAJOR:
*offset = (*minor / PARTS_PER_DISK) +
((major - XEN_SCSI_DISK8_MAJOR + 8) * 16) +
emulated_sd_disk_name_offset;
*minor = *minor +
((major - XEN_SCSI_DISK8_MAJOR + 8) * 16 * PARTS_PER_DISK) +
emulated_sd_disk_minor_offset;
break;
case XENVBD_MAJOR:
*offset = *minor / PARTS_PER_DISK;
break;
default:
printk(KERN_WARNING "blkfront: your disk configuration is "
"incorrect, please use an xvd device instead\n");
return -ENODEV;
}
return 0;
}
static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
struct blkfront_info *info,
u16 vdisk_info, u16 sector_size)
{
struct gendisk *gd;
int nr_minors = 1;
int err;
unsigned int offset;
int minor;
int nr_parts;
BUG_ON(info->gd != NULL);
BUG_ON(info->rq != NULL);
if ((info->vdevice>>EXT_SHIFT) > 1) {
/* this is above the extended range; something is wrong */
printk(KERN_WARNING "blkfront: vdevice 0x%x is above the extended range; ignoring\n", info->vdevice);
return -ENODEV;
}
if (!VDEV_IS_EXTENDED(info->vdevice)) {
err = xen_translate_vdev(info->vdevice, &minor, &offset);
if (err)
return err;
nr_parts = PARTS_PER_DISK;
} else {
minor = BLKIF_MINOR_EXT(info->vdevice);
nr_parts = PARTS_PER_EXT_DISK;
offset = minor / nr_parts;
if (xen_hvm_domain() && offset < EMULATED_HD_DISK_NAME_OFFSET + 4)
printk(KERN_WARNING "blkfront: vdevice 0x%x might conflict with "
"emulated IDE disks,\n\t choose an xvd device name"
"from xvde on\n", info->vdevice);
}
err = -ENODEV;
if ((minor % nr_parts) == 0)
nr_minors = nr_parts;
err = xlbd_reserve_minors(minor, nr_minors);
if (err)
goto out;
err = -ENODEV;
gd = alloc_disk(nr_minors);
if (gd == NULL)
goto release;
if (nr_minors > 1) {
if (offset < 26)
sprintf(gd->disk_name, "%s%c", DEV_NAME, 'a' + offset);
else
sprintf(gd->disk_name, "%s%c%c", DEV_NAME,
'a' + ((offset / 26)-1), 'a' + (offset % 26));
} else {
if (offset < 26)
sprintf(gd->disk_name, "%s%c%d", DEV_NAME,
'a' + offset,
minor & (nr_parts - 1));
else
sprintf(gd->disk_name, "%s%c%c%d", DEV_NAME,
'a' + ((offset / 26) - 1),
'a' + (offset % 26),
minor & (nr_parts - 1));
}
gd->major = XENVBD_MAJOR;
gd->first_minor = minor;
gd->fops = &xlvbd_block_fops;
gd->private_data = info;
gd->driverfs_dev = &(info->xbdev->dev);
set_capacity(gd, capacity);
if (xlvbd_init_blk_queue(gd, sector_size)) {
del_gendisk(gd);
goto release;
}
info->rq = gd->queue;
info->gd = gd;
xlvbd_flush(info);
if (vdisk_info & VDISK_READONLY)
set_disk_ro(gd, 1);
if (vdisk_info & VDISK_REMOVABLE)
gd->flags |= GENHD_FL_REMOVABLE;
if (vdisk_info & VDISK_CDROM)
gd->flags |= GENHD_FL_CD;
return 0;
release:
xlbd_release_minors(minor, nr_minors);
out:
return err;
}
static void xlvbd_release_gendisk(struct blkfront_info *info)
{
unsigned int minor, nr_minors;
unsigned long flags;
if (info->rq == NULL)
return;
spin_lock_irqsave(&blkif_io_lock, flags);
/* No more blkif_request(). */
blk_stop_queue(info->rq);
/* No more gnttab callback work. */
gnttab_cancel_free_callback(&info->callback);
spin_unlock_irqrestore(&blkif_io_lock, flags);
/* Flush gnttab callback work. Must be done with no locks held. */
flush_scheduled_work();
del_gendisk(info->gd);
minor = info->gd->first_minor;
nr_minors = info->gd->minors;
xlbd_release_minors(minor, nr_minors);
blk_cleanup_queue(info->rq);
info->rq = NULL;
put_disk(info->gd);
info->gd = NULL;
}
static void kick_pending_request_queues(struct blkfront_info *info)
{
if (!RING_FULL(&info->ring)) {
/* Re-enable calldowns. */
blk_start_queue(info->rq);
/* Kick things off immediately. */
do_blkif_request(info->rq);
}
}
static void blkif_restart_queue(struct work_struct *work)
{
struct blkfront_info *info = container_of(work, struct blkfront_info, work);
spin_lock_irq(&blkif_io_lock);
if (info->connected == BLKIF_STATE_CONNECTED)
kick_pending_request_queues(info);
spin_unlock_irq(&blkif_io_lock);
}
static void blkif_free(struct blkfront_info *info, int suspend)
{
/* Prevent new requests being issued until we fix things up. */
spin_lock_irq(&blkif_io_lock);
info->connected = suspend ?
BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
/* No more blkif_request(). */
if (info->rq)
blk_stop_queue(info->rq);
/* No more gnttab callback work. */
gnttab_cancel_free_callback(&info->callback);
spin_unlock_irq(&blkif_io_lock);
/* Flush gnttab callback work. Must be done with no locks held. */
flush_scheduled_work();
/* Free resources associated with old device channel. */
if (info->ring_ref != GRANT_INVALID_REF) {
gnttab_end_foreign_access(info->ring_ref, 0,
(unsigned long)info->ring.sring);
info->ring_ref = GRANT_INVALID_REF;
info->ring.sring = NULL;
}
if (info->irq)
unbind_from_irqhandler(info->irq, info);
info->evtchn = info->irq = 0;
}
static void blkif_completion(struct blk_shadow *s)
{
int i;
for (i = 0; i < s->req.nr_segments; i++)
gnttab_end_foreign_access(s->req.seg[i].gref, 0, 0UL);
}
static irqreturn_t blkif_interrupt(int irq, void *dev_id)
{
struct request *req;
struct blkif_response *bret;
RING_IDX i, rp;
unsigned long flags;
struct blkfront_info *info = (struct blkfront_info *)dev_id;
int error;
spin_lock_irqsave(&blkif_io_lock, flags);
if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
spin_unlock_irqrestore(&blkif_io_lock, flags);
return IRQ_HANDLED;
}
again:
rp = info->ring.sring->rsp_prod;
rmb(); /* Ensure we see queued responses up to 'rp'. */
for (i = info->ring.rsp_cons; i != rp; i++) {
unsigned long id;
bret = RING_GET_RESPONSE(&info->ring, i);
id = bret->id;
req = (struct request *)info->shadow[id].request;
blkif_completion(&info->shadow[id]);
add_id_to_freelist(info, id);
error = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO;
switch (bret->operation) {
case BLKIF_OP_WRITE_BARRIER:
if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
printk(KERN_WARNING "blkfront: %s: write barrier op failed\n",
info->gd->disk_name);
error = -EOPNOTSUPP;
info->feature_flush = 0;
xlvbd_flush(info);
}
/* fall through */
case BLKIF_OP_READ:
case BLKIF_OP_WRITE:
if (unlikely(bret->status != BLKIF_RSP_OKAY))
dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
"request: %x\n", bret->status);
__blk_end_request_all(req, error);
break;
default:
BUG();
}
}
info->ring.rsp_cons = i;
if (i != info->ring.req_prod_pvt) {
int more_to_do;
RING_FINAL_CHECK_FOR_RESPONSES(&info->ring, more_to_do);
if (more_to_do)
goto again;
} else
info->ring.sring->rsp_event = i + 1;
kick_pending_request_queues(info);
spin_unlock_irqrestore(&blkif_io_lock, flags);
return IRQ_HANDLED;
}
static int setup_blkring(struct xenbus_device *dev,
struct blkfront_info *info)
{
struct blkif_sring *sring;
int err;
info->ring_ref = GRANT_INVALID_REF;
sring = (struct blkif_sring *)__get_free_page(GFP_NOIO | __GFP_HIGH);
if (!sring) {
xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
return -ENOMEM;
}
SHARED_RING_INIT(sring);
FRONT_RING_INIT(&info->ring, sring, PAGE_SIZE);
sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring));
if (err < 0) {
free_page((unsigned long)sring);
info->ring.sring = NULL;
goto fail;
}
info->ring_ref = err;
err = xenbus_alloc_evtchn(dev, &info->evtchn);
if (err)
goto fail;
err = bind_evtchn_to_irqhandler(info->evtchn,
blkif_interrupt,
IRQF_SAMPLE_RANDOM, "blkif", info);
if (err <= 0) {
xenbus_dev_fatal(dev, err,
"bind_evtchn_to_irqhandler failed");
goto fail;
}
info->irq = err;
return 0;
fail:
blkif_free(info, 0);
return err;
}
/* Common code used when first setting up, and when resuming. */
static int talk_to_blkback(struct xenbus_device *dev,
struct blkfront_info *info)
{
const char *message = NULL;
struct xenbus_transaction xbt;
int err;
/* Create shared ring, alloc event channel. */
err = setup_blkring(dev, info);
if (err)
goto out;
again:
err = xenbus_transaction_start(&xbt);
if (err) {
xenbus_dev_fatal(dev, err, "starting transaction");
goto destroy_blkring;
}
err = xenbus_printf(xbt, dev->nodename,
"ring-ref", "%u", info->ring_ref);
if (err) {
message = "writing ring-ref";
goto abort_transaction;
}
err = xenbus_printf(xbt, dev->nodename,
"event-channel", "%u", info->evtchn);
if (err) {
message = "writing event-channel";
goto abort_transaction;
}
err = xenbus_printf(xbt, dev->nodename, "protocol", "%s",
XEN_IO_PROTO_ABI_NATIVE);
if (err) {
message = "writing protocol";
goto abort_transaction;
}
err = xenbus_transaction_end(xbt, 0);
if (err) {
if (err == -EAGAIN)
goto again;
xenbus_dev_fatal(dev, err, "completing transaction");
goto destroy_blkring;
}
xenbus_switch_state(dev, XenbusStateInitialised);
return 0;
abort_transaction:
xenbus_transaction_end(xbt, 1);
if (message)
xenbus_dev_fatal(dev, err, "%s", message);
destroy_blkring:
blkif_free(info, 0);
out:
return err;
}
/**
* Entry point to this code when a new device is created. Allocate the basic
* structures and the ring buffer for communication with the backend, and
* inform the backend of the appropriate details for those. Switch to
* Initialised state.
*/
static int blkfront_probe(struct xenbus_device *dev,
const struct xenbus_device_id *id)
{
int err, vdevice, i;
struct blkfront_info *info;
/* FIXME: Use dynamic device id if this is not set. */
err = xenbus_scanf(XBT_NIL, dev->nodename,
"virtual-device", "%i", &vdevice);
if (err != 1) {
/* go looking in the extended area instead */
err = xenbus_scanf(XBT_NIL, dev->nodename, "virtual-device-ext",
"%i", &vdevice);
if (err != 1) {
xenbus_dev_fatal(dev, err, "reading virtual-device");
return err;
}
}
/*
* Prevent hooking up IDE if ide-unplug not supported;
* Never hook up SCSI devices on pv-on-hvm guest
*/
if (xen_hvm_domain()) {
extern int xen_ide_unplug_unsupported;
int major;
int cfg = 1;
char* type;
int len;
if (!VDEV_IS_EXTENDED(vdevice))
major = BLKIF_MAJOR(vdevice);
else
major = XENVBD_MAJOR;
switch(major) {
case IDE0_MAJOR:
case IDE1_MAJOR:
case IDE2_MAJOR:
case IDE3_MAJOR:
case IDE4_MAJOR:
case IDE5_MAJOR:
case IDE6_MAJOR:
case IDE7_MAJOR:
case IDE8_MAJOR:
case IDE9_MAJOR:
if (xen_ide_unplug_unsupported)
cfg = 0;
break;
case SCSI_DISK0_MAJOR:
case SCSI_DISK1_MAJOR:
case SCSI_DISK2_MAJOR:
case SCSI_DISK3_MAJOR:
case SCSI_DISK4_MAJOR:
case SCSI_DISK5_MAJOR:
case SCSI_DISK6_MAJOR:
case SCSI_DISK7_MAJOR:
case SCSI_DISK8_MAJOR:
case SCSI_DISK9_MAJOR:
case SCSI_DISK10_MAJOR:
case SCSI_DISK11_MAJOR:
case SCSI_DISK12_MAJOR:
case SCSI_DISK13_MAJOR:
case SCSI_DISK14_MAJOR:
case SCSI_DISK15_MAJOR:
cfg = 0;
break;
}
if (cfg == 0) {
printk(KERN_INFO
"%s: HVM does not support vbd %d as xen block device\n",
__FUNCTION__, vdevice);
return -ENODEV;
}
/* do not create a PV cdrom device if we are an HVM guest */
type = xenbus_read(XBT_NIL, dev->nodename, "device-type", &len);
if (IS_ERR(type))
return -ENODEV;
if (strncmp(type, "cdrom", 5) == 0) {
kfree(type);
return -ENODEV;
}
kfree(type);
}
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
xenbus_dev_fatal(dev, -ENOMEM, "allocating info structure");
return -ENOMEM;
}
mutex_init(&info->mutex);
info->xbdev = dev;
info->vdevice = vdevice;
info->connected = BLKIF_STATE_DISCONNECTED;
INIT_WORK(&info->work, blkif_restart_queue);
for (i = 0; i < BLK_RING_SIZE; i++)
info->shadow[i].req.id = i+1;
info->shadow[BLK_RING_SIZE-1].req.id = 0x0fffffff;
/* Front end dir is a number, which is used as the id. */
info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
dev_set_drvdata(&dev->dev, info);
err = talk_to_blkback(dev, info);
if (err) {
kfree(info);
dev_set_drvdata(&dev->dev, NULL);
return err;
}
return 0;
}
static int blkif_recover(struct blkfront_info *info)
{
int i;
struct blkif_request *req;
struct blk_shadow *copy;
int j;
/* Stage 1: Make a safe copy of the shadow state. */
copy = kmalloc(sizeof(info->shadow),
GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
if (!copy)
return -ENOMEM;
memcpy(copy, info->shadow, sizeof(info->shadow));
/* Stage 2: Set up free list. */
memset(&info->shadow, 0, sizeof(info->shadow));
for (i = 0; i < BLK_RING_SIZE; i++)
info->shadow[i].req.id = i+1;
info->shadow_free = info->ring.req_prod_pvt;
info->shadow[BLK_RING_SIZE-1].req.id = 0x0fffffff;
/* Stage 3: Find pending requests and requeue them. */
for (i = 0; i < BLK_RING_SIZE; i++) {
/* Not in use? */
if (copy[i].request == 0)
continue;
/* Grab a request slot and copy shadow state into it. */
req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
*req = copy[i].req;
/* We get a new request id, and must reset the shadow state. */
req->id = get_id_from_freelist(info);
memcpy(&info->shadow[req->id], &copy[i], sizeof(copy[i]));
/* Rewrite any grant references invalidated by susp/resume. */
for (j = 0; j < req->nr_segments; j++)
gnttab_grant_foreign_access_ref(
req->seg[j].gref,
info->xbdev->otherend_id,
pfn_to_mfn(info->shadow[req->id].frame[j]),
rq_data_dir(
(struct request *)
info->shadow[req->id].request));
info->shadow[req->id].req = *req;
info->ring.req_prod_pvt++;
}
kfree(copy);
xenbus_switch_state(info->xbdev, XenbusStateConnected);
spin_lock_irq(&blkif_io_lock);
/* Now safe for us to use the shared ring */
info->connected = BLKIF_STATE_CONNECTED;
/* Send off requeued requests */
flush_requests(info);
/* Kick any other new requests queued since we resumed */
kick_pending_request_queues(info);
spin_unlock_irq(&blkif_io_lock);
return 0;
}
/**
* We are reconnecting to the backend, due to a suspend/resume, or a backend
* driver restart. We tear down our blkif structure and recreate it, but
* leave the device-layer structures intact so that this is transparent to the
* rest of the kernel.
*/
static int blkfront_resume(struct xenbus_device *dev)
{
struct blkfront_info *info = dev_get_drvdata(&dev->dev);
int err;
dev_dbg(&dev->dev, "blkfront_resume: %s\n", dev->nodename);
blkif_free(info, info->connected == BLKIF_STATE_CONNECTED);
err = talk_to_blkback(dev, info);
if (info->connected == BLKIF_STATE_SUSPENDED && !err)
err = blkif_recover(info);
return err;
}
static void
blkfront_closing(struct blkfront_info *info)
{
struct xenbus_device *xbdev = info->xbdev;
struct block_device *bdev = NULL;
mutex_lock(&info->mutex);
if (xbdev->state == XenbusStateClosing) {
mutex_unlock(&info->mutex);
return;
}
if (info->gd)
bdev = bdget_disk(info->gd, 0);
mutex_unlock(&info->mutex);
if (!bdev) {
xenbus_frontend_closed(xbdev);
return;
}
mutex_lock(&bdev->bd_mutex);
if (bdev->bd_openers) {
xenbus_dev_error(xbdev, -EBUSY,
"Device in use; refusing to close");
} else {
xlvbd_release_gendisk(info);
xenbus_frontend_closed(xbdev);
}
mutex_unlock(&bdev->bd_mutex);
bdput(bdev);
}
/*
* Invoked when the backend is finally 'ready' (and has told produced
* the details about the physical device - #sectors, size, etc).
*/
static void blkfront_connect(struct blkfront_info *info)
{
unsigned long long sectors;
unsigned long sector_size;
unsigned int binfo;
int err;
int barrier;
dev_t devt;
switch (info->connected) {
case BLKIF_STATE_CONNECTED:
/*
* Potentially, the back-end may be signalling
* a capacity change; update the capacity.
*/
err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
"sectors", "%Lu", §ors);
if (XENBUS_EXIST_ERR(err))
return;
devt = disk_devt(info->gd);
printk(KERN_INFO "Changing capacity of (%u, %u) to %Lu "
"sectors\n", (unsigned)MAJOR(devt),
(unsigned)MINOR(devt), sectors);
set_capacity(info->gd, sectors);
revalidate_disk(info->gd);
/* fall through */
case BLKIF_STATE_SUSPENDED:
return;
default:
/* keep gcc quiet; ISO C99 6.8.4.2p5, 6.8.3p6 */
;
}
dev_dbg(&info->xbdev->dev, "%s:%s.\n",
__func__, info->xbdev->otherend);
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"sectors", "%llu", §ors,
"info", "%u", &binfo,
"sector-size", "%lu", §or_size,
NULL);
if (err) {
xenbus_dev_fatal(info->xbdev, err,
"reading backend fields at %s",
info->xbdev->otherend);
return;
}
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-barrier", "%d", &barrier,
NULL);
/*
* If there's no "feature-barrier" defined, then it means
* we're dealing with a very old backend which writes
* synchronously; nothing to do.
*
* If there are barriers, then we use flush.
*/
info->feature_flush = 0;
/*
* The driver doesn't properly handled empty flushes, so
* lets disable barrier support for now.
*/
#if 0
if (!err && barrier)
info->feature_flush = REQ_FLUSH;
#endif
err = xlvbd_alloc_gendisk(sectors, info, binfo, sector_size);
if (err) {
xenbus_dev_fatal(info->xbdev, err, "xlvbd_add at %s",
info->xbdev->otherend);
return;
}
xenbus_switch_state(info->xbdev, XenbusStateConnected);
/* Kick pending requests. */
spin_lock_irq(&blkif_io_lock);
info->connected = BLKIF_STATE_CONNECTED;
kick_pending_request_queues(info);
spin_unlock_irq(&blkif_io_lock);
add_disk(info->gd);
info->is_ready = 1;
}
/**
* Callback received when the backend's state changes.
*/
static void blkback_changed(struct xenbus_device *dev,
enum xenbus_state backend_state)
{
struct blkfront_info *info = dev_get_drvdata(&dev->dev);
dev_dbg(&dev->dev, "blkfront:blkback_changed to state %d.\n", backend_state);
switch (backend_state) {
case XenbusStateInitialising:
case XenbusStateInitWait:
case XenbusStateInitialised:
case XenbusStateUnknown:
case XenbusStateClosed:
break;
case XenbusStateConnected:
blkfront_connect(info);
break;
case XenbusStateClosing:
blkfront_closing(info);
break;
}
}
static int blkfront_remove(struct xenbus_device *xbdev)
{
struct blkfront_info *info = dev_get_drvdata(&xbdev->dev);
struct block_device *bdev = NULL;
struct gendisk *disk;
dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
blkif_free(info, 0);
mutex_lock(&info->mutex);
disk = info->gd;
if (disk)
bdev = bdget_disk(disk, 0);
info->xbdev = NULL;
mutex_unlock(&info->mutex);
if (!bdev) {
kfree(info);
return 0;
}
/*
* The xbdev was removed before we reached the Closed
* state. See if it's safe to remove the disk. If the bdev
* isn't closed yet, we let release take care of it.
*/
mutex_lock(&bdev->bd_mutex);
info = disk->private_data;
if (info && !bdev->bd_openers) {
xlvbd_release_gendisk(info);
disk->private_data = NULL;
kfree(info);
}
mutex_unlock(&bdev->bd_mutex);
bdput(bdev);
return 0;
}
static int blkfront_is_ready(struct xenbus_device *dev)
{
struct blkfront_info *info = dev_get_drvdata(&dev->dev);
return info->is_ready && info->xbdev;
}
static int blkif_open(struct block_device *bdev, fmode_t mode)
{
struct gendisk *disk = bdev->bd_disk;
struct blkfront_info *info;
int err = 0;
info = disk->private_data;
if (!info) {
/* xbdev gone */
err = -ERESTARTSYS;
goto out;
}
mutex_lock(&info->mutex);
if (!info->gd)
/* xbdev is closed */
err = -ERESTARTSYS;
mutex_unlock(&info->mutex);
out:
return err;
}
static int blkif_release(struct gendisk *disk, fmode_t mode)
{
struct blkfront_info *info = disk->private_data;
struct block_device *bdev;
struct xenbus_device *xbdev;
bdev = bdget_disk(disk, 0);
if (bdev->bd_openers)
goto out;
/*
* Check if we have been instructed to close. We will have
* deferred this request, because the bdev was still open.
*/
mutex_lock(&info->mutex);
xbdev = info->xbdev;
if (xbdev && xbdev->state == XenbusStateClosing) {
/* pending switch to state closed */
xlvbd_release_gendisk(info);
xenbus_frontend_closed(info->xbdev);
}
mutex_unlock(&info->mutex);
if (!xbdev) {
/* sudden device removal */
xlvbd_release_gendisk(info);
disk->private_data = NULL;
kfree(info);
}
out:
bdput(bdev);
return 0;
}
static const struct block_device_operations xlvbd_block_fops =
{
.owner = THIS_MODULE,
.open = blkif_open,
.release = blkif_release,
.getgeo = blkif_getgeo,
.locked_ioctl = blkif_ioctl,
};
static struct xenbus_device_id blkfront_ids[] = {
{ "vbd" },
{ "" }
};
static struct xenbus_driver blkfront = {
.name = "vbd",
.owner = THIS_MODULE,
.ids = blkfront_ids,
.probe = blkfront_probe,
.remove = blkfront_remove,
.resume = blkfront_resume,
.otherend_changed = blkback_changed,
.is_ready = blkfront_is_ready,
};
static int __init xlblk_init(void)
{
int ret;
if (!xen_domain())
return -ENODEV;
if (xen_hvm_domain() && !xen_platform_pci_unplug)
return -ENODEV;
printk("%s: register_blkdev major: %d \n", __FUNCTION__, XENVBD_MAJOR);
if (register_blkdev(XENVBD_MAJOR, DEV_NAME)) {
printk(KERN_WARNING "xen_blk: can't get major %d with name %s\n",
XENVBD_MAJOR, DEV_NAME);
return -ENODEV;
}
if (sda_is_xvda) {
emulated_sd_disk_minor_offset = 0;
emulated_sd_disk_name_offset = emulated_sd_disk_minor_offset / 256;
}
ret = xenbus_register_frontend(&blkfront);
if (ret) {
unregister_blkdev(XENVBD_MAJOR, DEV_NAME);
return ret;
}
return 0;
}
module_init(xlblk_init);
static void __exit xlblk_exit(void)
{
return xenbus_unregister_driver(&blkfront);
}
module_exit(xlblk_exit);
MODULE_DESCRIPTION("Xen virtual block device frontend");
MODULE_LICENSE("GPL");
MODULE_ALIAS_BLOCKDEV_MAJOR(XENVBD_MAJOR);
MODULE_ALIAS("xen:vbd");
MODULE_ALIAS("xenblk");
* Re: IO speed limited by size of IO request (for RBD driver)
From: Roger Pau Monné @ 2013-04-27 7:06 UTC
To: Steven Haigh; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 27/04/13 03:57, Steven Haigh wrote:
> On 27/04/2013 12:16 AM, Steven Haigh wrote:
>> On 27/04/2013 12:06 AM, Roger Pau Monné wrote:
>>> On 23/04/13 21:05, Steven Haigh wrote:
>>>> Sorry - resending this to Felipe as well - as I started talking to him
>>>> directly previously.
>>>>
>>>> Felipe, to bring you up to date, I've copied over the blkback files from
>>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and
>>>> tested. Results below:
>>>>
>>
>> Bringing this into context in a nutshell - results showed about 5MB/sec
>> improvement when using buffered disk access - totalling ~57MB/sec write
>> speed vs ~98MB/sec when using the oflag=direct flag to dd.
>>
>> When talking about back porting a few indirect patches to mainline
>> blkback (3.8.8 atm):
>>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote:
>>>>> I think it requires a non-trivial amount of work, what you could do
>>>>> as a
>>>>> test is directly replace the affected files with the ones in my
>>>>> tree, it
>>>>> is not optimal, but I don't think it's going to cause problems, and you
>>>>> could at least see if indirect descriptors solve your problem.
>>>>
>>>> Ok, I copied across those files, built, packaged and installed them on
>>>> my Dom0. Good news is that its a little quicker, bad news is not by
>>>> much.
>>>
>>> Could you try increasing xen_blkif_max_segments variable in
>>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only
>>> need to recompile the DomU kernel after this, the Dom0 is able to
>>> support up to 256 indirect segments.
>>
>> I'll have to look at this. All DomU's are Scientific Linux 6.4 systems -
>> so essentially RHEL6.4 and so on. I haven't built a RH kernel as yet -
>> so I'll have to look at what is involved. It might be as simple as
>> rebuilding a normal SRPM.
>
> Ok, I've had a look at the RH xen-blkfront.c - and I can't see any
> definition of xen_blkif_max_segments - or anything close. I've attached
> the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6 srpm.
>
> Any ideas on where to go from here?
I thought you were using the 3.8.x kernel inside the DomU as well. If you
are not, then it's normal that there's no speed difference: you have a
Dom0 kernel that supports indirect descriptors, but your DomU doesn't.
You must use a kernel that supports indirect descriptors in both Dom0 and
DomU in order to make use of this feature.
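
To make the "both Dom0 and DomU" requirement concrete: the feature has to be
negotiated, so an old frontend (such as the stock EL6 one) never asks for
indirect descriptors even when the backend supports them. A rough
illustration of that negotiation logic; the clamping and defaults in Roger's
actual patches may differ:

/* Rough sketch of the feature negotiation described above -- illustrative
 * only, not code from the for-jens-3.10 branch. */
#define CLASSIC_SEGMENTS 11   /* BLKIF_MAX_SEGMENTS_PER_REQUEST */

unsigned int negotiated_segments(unsigned int frontend_wants,
                                 unsigned int backend_advertises)
{
    /* An old backend advertises nothing (0); an old frontend never reads
     * the advertisement and never asks for more than the classic limit.
     * Either way the request size stays at 11 segments (44 KiB). */
    if (backend_advertises == 0 || frontend_wants <= CLASSIC_SEGMENTS)
        return CLASSIC_SEGMENTS;

    return frontend_wants < backend_advertises ? frontend_wants
                                                : backend_advertises;
}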
* Re: IO speed limited by size of IO request (for RBD driver)
From: Steven Haigh @ 2013-04-27 7:51 UTC
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 27/04/2013 5:06 PM, Roger Pau Monné wrote:
> On 27/04/13 03:57, Steven Haigh wrote:
>> On 27/04/2013 12:16 AM, Steven Haigh wrote:
>>> On 27/04/2013 12:06 AM, Roger Pau Monné wrote:
>>>> On 23/04/13 21:05, Steven Haigh wrote:
>>>>> Sorry - resending this to Felipe as well - as I started talking to him
>>>>> directly previously.
>>>>>
>>>>> Felipe, to bring you up to date, I've copied over the blkback files from
>>>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and
>>>>> tested. Results below:
>>>>>
>>>
>>> Bringing this into context in a nutshell - results showed about 5MB/sec
>>> improvement when using buffered disk access - totalling ~57MB/sec write
>>> speed vs ~98MB/sec when using the oflag=direct flag to dd.
>>>
>>> When talking about back porting a few indirect patches to mainline
>>> blkback (3.8.8 atm):
>>>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote:
>>>>>> I think it requires a non-trivial amount of work, what you could do
>>>>>> as a
>>>>>> test is directly replace the affected files with the ones in my
>>>>>> tree, it
>>>>>> is not optimal, but I don't think it's going to cause problems, and you
>>>>>> could at least see if indirect descriptors solve your problem.
>>>>>
>>>>> Ok, I copied across those files, built, packaged and installed them on
>>>>> my Dom0. Good news is that its a little quicker, bad news is not by
>>>>> much.
>>>>
>>>> Could you try increasing xen_blkif_max_segments variable in
>>>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only
>>>> need to recompile the DomU kernel after this, the Dom0 is able to
>>>> support up to 256 indirect segments.
>>>
>>> I'll have to look at this. All DomU's are Scientific Linux 6.4 systems -
>>> so essentially RHEL6.4 and so on. I haven't built a RH kernel as yet -
>>> so I'll have to look at what is involved. It might be as simple as
>>> rebuilding a normal SRPM.
>>
>> Ok, I've had a look at the RH xen-blkfront.c - and I can't see any
>> definition of xen_blkif_max_segments - or anything close. I've attached
>> the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6 srpm.
>>
>> Any ideas on where to go from here?
>
> I thought you were using the 3.8.x kernel inside the DomU also, if you
> are not using it, then it's normal that there's no speed difference, you
> have a Dom0 kernel that supports indirect descriptors, but your DomU
> doesn't. You must use a kernel that supports indirect descriptors in
> both Dom0 and DomU in order to make use of this feature.
Ahhh - sorry - I should have been clearer - the Dom0 is kernel 3.8.x
(3.8.8 right now) - however the DomUs are all stock EL6 kernels.

Hmmmm - I believe the kernel I build for Dom0 *should* work as a DomU.
I'll do some more experimentation and see if I can get it working
properly as a DomU kernel.
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
* Re: IO speed limited by size of IO request (for RBD driver)
From: Steven Haigh @ 2013-04-27 8:35 UTC
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 27/04/2013 5:51 PM, Steven Haigh wrote:
> On 27/04/2013 5:06 PM, Roger Pau Monné wrote:
>> On 27/04/13 03:57, Steven Haigh wrote:
>>> On 27/04/2013 12:16 AM, Steven Haigh wrote:
>>>> On 27/04/2013 12:06 AM, Roger Pau Monné wrote:
>>>>> On 23/04/13 21:05, Steven Haigh wrote:
>>>>>> Sorry - resending this to Felipe as well - as I started talking to
>>>>>> him
>>>>>> directly previously.
>>>>>>
>>>>>> Felipe, to bring you up to date, I've copied over the blkback
>>>>>> files from
>>>>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and
>>>>>> tested. Results below:
>>>>>>
>>>>
>>>> Bringing this into context in a nutshell - results showed about 5MB/sec
>>>> improvement when using buffered disk access - totalling ~57MB/sec write
>>>> speed vs ~98MB/sec when using the oflag=direct flag to dd.
>>>>
>>>> When talking about back porting a few indirect patches to mainline
>>>> blkback (3.8.8 atm):
>>>>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote:
>>>>>>> I think it requires a non-trivial amount of work, what you could do
>>>>>>> as a
>>>>>>> test is directly replace the affected files with the ones in my
>>>>>>> tree, it
>>>>>>> is not optimal, but I don't think it's going to cause problems,
>>>>>>> and you
>>>>>>> could at least see if indirect descriptors solve your problem.
>>>>>>
>>>>>> Ok, I copied across those files, built, packaged and installed
>>>>>> them on
>>>>>> my Dom0. Good news is that its a little quicker, bad news is not by
>>>>>> much.
>>>>>
>>>>> Could you try increasing xen_blkif_max_segments variable in
>>>>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only
>>>>> need to recompile the DomU kernel after this, the Dom0 is able to
>>>>> support up to 256 indirect segments.
>>>>
>>>> I'll have to look at this. All DomU's are Scientific Linux 6.4
>>>> systems -
>>>> so essentially RHEL6.4 and so on. I haven't built a RH kernel as yet -
>>>> so I'll have to look at what is involved. It might be as simple as
>>>> rebuilding a normal SRPM.
>>>
>>> Ok, I've had a look at the RH xen-blkfront.c - and I can't see any
>>> definition of xen_blkif_max_segments - or anything close. I've attached
>>> the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6
>>> srpm.
>>>
>>> Any ideas on where to go from here?
>>
>> I thought you were using the 3.8.x kernel inside the DomU also, if you
>> are not using it, then it's normal that there's no speed difference, you
>> have a Dom0 kernel that supports indirect descriptors, but your DomU
>> doesn't. You must use a kernel that supports indirect descriptors in
>> both Dom0 and DomU in order to make use of this feature.
>
> Ahhh - sorry - I should have been clearer - The Dom0 is kernel 3.8.x
> (3.8.8 right now) - however the DomUs are all stock EL6 kernels.
>
> Hmmmm - I believe the kernel I build for Dom0 *should* work as a DomU.
> I'll do some more experimentation and see if I can get it working
> properly as a DomU kernel.
Ok - now for testing the basics.
Same tests using vanilla 3.8.8 kernel:
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 37.1206 s, 57.9 MB/s
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     2.65     0.00    0.22  97.13

Device:  rrqm/s   wrqm/s    r/s     w/s    rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdf      395.81  2201.32  27.59  156.95     1.65   9.21    120.60      1.13   6.12   1.19  22.05
sde      404.86  2208.83  28.04  157.40     1.69   9.24    120.77      1.32   7.15   1.31  24.24
sdc      435.54  2174.83  30.68  155.63     1.82   9.10    120.09      0.97   5.20   1.11  20.64
sdd      388.96  2177.26  26.71  155.41     1.62   9.11    120.74      1.10   6.01   1.30  23.60
md2        0.00     0.00   0.00  537.31     0.00  17.59     67.05      0.00   0.00   0.00   0.00
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.3928 s, 84.6 MB/s
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.22   0.00    15.74     0.00    0.22  83.81

Device:  rrqm/s    wrqm/s    r/s     w/s    rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdf      336.81  12659.65  10.86  232.59     1.36  50.36    435.06      1.00   4.09   2.51  61.06
sde     1684.04  11000.22  54.32  189.14     6.79  43.71    424.80      1.45   5.95   3.50  85.28
sdc      144.35  11177.61   4.66  238.80     0.58  44.60    380.04      0.41   1.70   1.07  26.08
sdd       20.62  12876.50   0.67  242.79     0.08  51.25    431.80      0.45   1.84   1.15  27.92
md2        0.00      0.00   0.00 2680.71     0.00  86.47     66.06      0.00   0.00   0.00   0.00
Installed and rebooted into the patched version I built by just copying
the blkback files across to the 3.8.8 tree and building:
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 45.2376 s, 47.5 MB/s
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     1.35     0.00    0.45  98.19

Device:  rrqm/s   wrqm/s     r/s     w/s    rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdd     1340.80  5806.80  158.20  674.40     5.83  25.27     76.51      6.00   7.16   0.80  66.90
sde     1334.60  5894.00  160.80  686.40     5.86  25.71     76.32      6.87   8.11   0.87  73.52
sdc     1330.80  5858.20  158.00  682.40     5.86  25.60     76.67      5.71   6.81   0.77  64.84
sdf     1341.00  5848.80  157.00  681.20     5.83  25.49     76.53      6.23   7.38   0.85  70.92
md2        0.00     0.00    0.00 1431.40     0.00  46.83     67.01      0.00   0.00   0.00   0.00
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 38.9052 s, 55.2 MB/s
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     5.27     0.00    0.32  94.41

Device:  rrqm/s   wrqm/s    r/s     w/s    rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdd      493.20  8481.60  36.80  335.80     2.07  34.45    200.73      1.14   3.07   0.97  36.32
sde     1371.60  7380.20  83.80  304.80     5.66  30.08    188.34      2.20   5.65   1.94  75.38
sdc      540.20  7556.80  56.00  326.20     2.33  30.80    177.52      1.49   3.90   1.26  48.02
sdf      734.20  8286.60  64.40  326.20     3.12  33.67    192.92      1.66   4.24   1.45  56.66
md2        0.00     0.00   0.00 1835.20     0.00  59.20     66.06      0.00   0.00   0.00   0.00
That is with the same kernel running on both Dom0 and DomU.
In the dmesg of the DomU, I see the following:
blkfront: xvdb: flush diskcache: enabled using persistent grants
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-04-27 8:35 ` Steven Haigh
@ 2013-04-29 8:38 ` Roger Pau Monné
2013-04-29 19:26 ` Steven Haigh
0 siblings, 1 reply; 25+ messages in thread
From: Roger Pau Monné @ 2013-04-29 8:38 UTC (permalink / raw)
To: Steven Haigh; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 27/04/13 10:35, Steven Haigh wrote:
> On 27/04/2013 5:51 PM, Steven Haigh wrote:
>> On 27/04/2013 5:06 PM, Roger Pau Monné wrote:
>>> On 27/04/13 03:57, Steven Haigh wrote:
>>>> On 27/04/2013 12:16 AM, Steven Haigh wrote:
>>>>> On 27/04/2013 12:06 AM, Roger Pau Monné wrote:
>>>>>> On 23/04/13 21:05, Steven Haigh wrote:
>>>>>>> Sorry - resending this to Felipe as well - as I started talking to
>>>>>>> him
>>>>>>> directly previously.
>>>>>>>
>>>>>>> Felipe, to bring you up to date, I've copied over the blkback
>>>>>>> files from
>>>>>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and
>>>>>>> tested. Results below:
>>>>>>>
>>>>>
>>>>> Bringing this into context in a nutshell - results showed about 5MB/sec
>>>>> improvement when using buffered disk access - totalling ~57MB/sec write
>>>>> speed vs ~98MB/sec when using the oflag=direct flag to dd.
>>>>>
>>>>> When talking about back porting a few indirect patches to mainline
>>>>> blkback (3.8.8 atm):
>>>>>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote:
>>>>>>>> I think it requires a non-trivial amount of work, what you could do
>>>>>>>> as a
>>>>>>>> test is directly replace the affected files with the ones in my
>>>>>>>> tree, it
>>>>>>>> is not optimal, but I don't think it's going to cause problems,
>>>>>>>> and you
>>>>>>>> could at least see if indirect descriptors solve your problem.
>>>>>>>
>>>>>>> Ok, I copied across those files, built, packaged and installed
>>>>>>> them on
>>>>>>> my Dom0. Good news is that its a little quicker, bad news is not by
>>>>>>> much.
>>>>>>
>>>>>> Could you try increasing xen_blkif_max_segments variable in
>>>>>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only
>>>>>> need to recompile the DomU kernel after this, the Dom0 is able to
>>>>>> support up to 256 indirect segments.
>>>>>
>>>>> I'll have to look at this. All DomU's are Scientific Linux 6.4
>>>>> systems -
>>>>> so essentially RHEL6.4 and so on. I haven't built a RH kernel as yet -
>>>>> so I'll have to look at what is involved. It might be as simple as
>>>>> rebuilding a normal SRPM.
>>>>
>>>> Ok, I've had a look at the RH xen-blkfront.c - and I can't see any
>>>> definition of xen_blkif_max_segments - or anything close. I've attached
>>>> the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6
>>>> srpm.
>>>>
>>>> Any ideas on where to go from here?
>>>
>>> I thought you were using the 3.8.x kernel inside the DomU also, if you
>>> are not using it, then it's normal that there's no speed difference, you
>>> have a Dom0 kernel that supports indirect descriptors, but your DomU
>>> doesn't. You must use a kernel that supports indirect descriptors in
>>> both Dom0 and DomU in order to make use of this feature.
>>
>> Ahhh - sorry - I should have been clearer - The Dom0 is kernel 3.8.x
>> (3.8.8 right now) - however the DomUs are all stock EL6 kernels.
>>
>> Hmmmm - I believe the kernel I build for Dom0 *should* work as a DomU.
>> I'll do some more experimentation and see if I can get it working
>> properly as a DomU kernel.
>
> Ok - now for testing the basics.
>
> Same tests using vanilla 3.8.8 kernel:
> # dd if=/dev/zero of=output.zero bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 37.1206 s, 57.9 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.00 0.00 2.65 0.00 0.22 97.13
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdf 395.81 2201.32 27.59 156.95 1.65 9.21
> 120.60 1.13 6.12 1.19 22.05
> sde 404.86 2208.83 28.04 157.40 1.69 9.24
> 120.77 1.32 7.15 1.31 24.24
> sdc 435.54 2174.83 30.68 155.63 1.82 9.10
> 120.09 0.97 5.20 1.11 20.64
> sdd 388.96 2177.26 26.71 155.41 1.62 9.11
> 120.74 1.10 6.01 1.30 23.60
> md2 0.00 0.00 0.00 537.31 0.00 17.59
> 67.05 0.00 0.00 0.00 0.00
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.3928 s, 84.6 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.22 0.00 15.74 0.00 0.22 83.81
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdf 336.81 12659.65 10.86 232.59 1.36 50.36
> 435.06 1.00 4.09 2.51 61.06
> sde 1684.04 11000.22 54.32 189.14 6.79 43.71
> 424.80 1.45 5.95 3.50 85.28
> sdc 144.35 11177.61 4.66 238.80 0.58 44.60
> 380.04 0.41 1.70 1.07 26.08
> sdd 20.62 12876.50 0.67 242.79 0.08 51.25
> 431.80 0.45 1.84 1.15 27.92
> md2 0.00 0.00 0.00 2680.71 0.00 86.47
> 66.06 0.00 0.00 0.00 0.00
>
> Installed and rebooted into the patched version I build by just copying
> the blkback files across to the 3.8.8 tree and building:
Did you also copy xen-blkfront?
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 45.2376 s, 47.5 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.00 0.00 1.35 0.00 0.45 98.19
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 1340.80 5806.80 158.20 674.40 5.83 25.27
> 76.51 6.00 7.16 0.80 66.90
> sde 1334.60 5894.00 160.80 686.40 5.86 25.71
> 76.32 6.87 8.11 0.87 73.52
> sdc 1330.80 5858.20 158.00 682.40 5.86 25.60
> 76.67 5.71 6.81 0.77 64.84
> sdf 1341.00 5848.80 157.00 681.20 5.83 25.49
> 76.53 6.23 7.38 0.85 70.92
> md2 0.00 0.00 0.00 1431.40 0.00 46.83
> 67.01 0.00 0.00 0.00 0.00
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 38.9052 s, 55.2 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.00 0.00 5.27 0.00 0.32 94.41
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 493.20 8481.60 36.80 335.80 2.07 34.45
> 200.73 1.14 3.07 0.97 36.32
> sde 1371.60 7380.20 83.80 304.80 5.66 30.08
> 188.34 2.20 5.65 1.94 75.38
> sdc 540.20 7556.80 56.00 326.20 2.33 30.80
> 177.52 1.49 3.90 1.26 48.02
> sdf 734.20 8286.60 64.40 326.20 3.12 33.67
> 192.92 1.66 4.24 1.45 56.66
> md2 0.00 0.00 0.00 1835.20 0.00 59.20
> 66.06 0.00 0.00 0.00 0.00
>
> That is with the same kernel running on both Dom0 and DomU.
>
> In the dmesg of the DomU, I see the following:
> blkfront: xvdb: flush diskcache: enabled using persistent grants
It seems you are missing some pieces; you should see something like:
blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
indirect descriptors: enabled;
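Besides the dmesg line, the negotiated features can also be checked from the
Dom0 via xenstore; the key name below comes from the indirect descriptor
series as I understand it and may differ in other trees, so treat this as a
sketch only:

# On the Dom0: a backend with indirect descriptor support should advertise a
# maximum indirect segment count for the guest's vbd (key name assumed):
xenstore-ls -f /local/domain/0/backend/vbd | grep -i indirect

# In the DomU: confirm what blkfront actually negotiated:
dmesg | grep blkfront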
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-04-29 8:38 ` Roger Pau Monné
@ 2013-04-29 19:26 ` Steven Haigh
2013-04-29 19:47 ` Steven Haigh
0 siblings, 1 reply; 25+ messages in thread
From: Steven Haigh @ 2013-04-29 19:26 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 29/04/2013 6:38 PM, Roger Pau Monné wrote:
> Did you also copy xen-blkfront?
Dammit! No, no I didn't. I tried to just copy this back over to the
3.8.8 and 3.8.10 kernel versions, but it came up with too many errors -
so I just rebuilt/packaged the checkout of your git based on 3.8.0-rc7.
> It seems you are missing some pieces, you should see something like:
>
> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled;
Now I'm running 3.8.0-rc7 from your git on both DomU and Dom0. In the
DomU, I now see:
blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
indirect descriptors: enabled;
From what you say, this should be what I'd expect.
From the DomU:
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.23 0.00 9.61 0.00 0.46 89.70
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdd 1071.40 7914.87 67.28 479.18 4.40 32.64 138.82 17.45 31.65 2.00 109.36
sde 1151.72 7943.71 68.65 486.73 4.79 33.20 140.10 13.18 23.87 1.93 107.14
sdc 1123.34 7921.05 66.36 482.84 4.66 32.86 139.89 8.80 15.96 1.86 102.31
sdf 1091.53 7937.30 70.02 483.30 4.54 32.97 138.84 18.98 34.31 1.98 109.45
md2 0.00 0.00 0.00 1003.66 0.00 65.31 133.27 0.00 0.00 0.00 0.00
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.22 0.00 10.94 0.00 0.22 88.62
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdd 20.35 13703.72 1.75 258.64 0.10 54.54 429.75 0.47 1.81 1.13 29.34
sde 1858.64 11655.36 61.05 199.56 7.51 46.36 423.36 1.54 5.89 3.27 85.27
sdc 142.45 11824.07 5.47 254.70 0.59 47.18 376.03 0.42 1.61 1.02 26.59
sdf 332.39 13489.72 11.38 248.80 1.35 53.72 433.47 1.06 4.10 2.50 65.16
md2 0.00 0.00 3.72 733.48 0.06 91.68 254.86 0.00 0.00 0.00 0.00
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-04-29 19:26 ` Steven Haigh
@ 2013-04-29 19:47 ` Steven Haigh
2013-04-30 10:07 ` Felipe Franciosi
0 siblings, 1 reply; 25+ messages in thread
From: Steven Haigh @ 2013-04-29 19:47 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 30/04/2013 5:26 AM, Steven Haigh wrote:
> On 29/04/2013 6:38 PM, Roger Pau Monné wrote:
>> Did you also copy xen-blkfront?
>
> Dammit! No, no I didn't. I tried to just copy this back over to the
> 3.8.8 and 3.8.10 kernel versions, but it came up with too many errors -
> so I just rebuilt/packages the checkout of your git based on 3.8.0-rc7.
>
>> It seems you are missing some pieces, you should see something like:
>>
>> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
>> indirect descriptors: enabled;
>
> Now I'm running 3.8.0-rc7 from your git on both DomU and Dom0. In the
> DomU, I now see:
>
> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled;
>
> From what you say, this should be what I'd expect.
>
> From the DomU:
> # dd if=/dev/zero of=output.zero bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.23 0.00 9.61 0.00 0.46 89.70
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 1071.40 7914.87 67.28 479.18 4.40 32.64
> 138.82 17.45 31.65 2.00 109.36
> sde 1151.72 7943.71 68.65 486.73 4.79 33.20
> 140.10 13.18 23.87 1.93 107.14
> sdc 1123.34 7921.05 66.36 482.84 4.66 32.86
> 139.89 8.80 15.96 1.86 102.31
> sdf 1091.53 7937.30 70.02 483.30 4.54 32.97
> 138.84 18.98 34.31 1.98 109.45
> md2 0.00 0.00 0.00 1003.66 0.00 65.31
> 133.27 0.00 0.00 0.00 0.00
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.22 0.00 10.94 0.00 0.22 88.62
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 20.35 13703.72 1.75 258.64 0.10 54.54
> 429.75 0.47 1.81 1.13 29.34
> sde 1858.64 11655.36 61.05 199.56 7.51 46.36
> 423.36 1.54 5.89 3.27 85.27
> sdc 142.45 11824.07 5.47 254.70 0.59 47.18
> 376.03 0.42 1.61 1.02 26.59
> sdf 332.39 13489.72 11.38 248.80 1.35 53.72
> 433.47 1.06 4.10 2.50 65.16
> md2 0.00 0.00 3.72 733.48 0.06 91.68
> 254.86 0.00 0.00 0.00 0.00
I just thought - I should probably include a baseline by mounting the
same LV in the Dom0 and doing the exact same tests.
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 19.1554 s, 112 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 23.18 76.60 0.22 0.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdd 139.07 14785.43 11.92 286.98 0.59 58.88 407.50 2.60 8.71 1.84 54.92
sde 83.44 14846.58 8.39 292.05 0.36 59.09 405.23 4.12 13.69 2.56 76.84
sdc 98.23 14828.04 9.93 289.18 0.42 58.84 405.73 2.55 8.45 1.75 52.43
sdf 77.04 14816.78 8.61 289.40 0.33 58.96 407.51 3.89 13.05 2.52 75.14
md2 0.00 0.00 0.00 973.51 0.00 116.72 245.55 0.00 0.00 0.00 0.00
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 12.22 87.58 0.21 0.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdd 32.09 12310.14 1.04 291.10 0.13 49.22 345.99 0.48 1.66 0.91 26.71
sde 1225.88 9931.88 39.54 224.84 4.94 39.70 345.81 1.20 4.53 2.44 64.55
sdc 19.25 11116.15 0.62 266.05 0.08 44.46 342.06 0.41 1.53 0.86 22.94
sdf 1206.63 11122.77 38.92 253.21 4.87 44.51 346.17 1.39 4.78 2.46 71.97
md2 0.00 0.00 0.00 634.37 0.00 79.30 256.00 0.00 0.00 0.00 0.00
This is running the same kernel - 3.8.0-rc7 from your git.
And also for the sake of completeness, the Dom0 grub.conf:
title Scientific Linux (3.8.0-1.el6xen.x86_64)
root (hd0,0)
kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin
module /vmlinuz-3.8.0-1.el6xen.x86_64 ro root=/dev/vg_raid1/xenhost rd_LVM_LV=vg_raid1/xenhost rd_MD_UUID=afb92c19:b9b1e3ae:07af315d:738e38be rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto quiet panic=5 i915.i915_enable_rc6=7 i915.i915_enable_fbc=1 i915.lvds_downclock=1 drm.vblankoffdelay=1
module /initramfs-3.8.0-1.el6xen.x86_64.img
and the DomU config:
# cat /etc/xen/zeus.vm
name = "zeus.vm"
memory = 1024
vcpus = 2
cpus = "1-3"
disk = [ 'phy:/dev/vg_raid1/zeus.vm,xvda,w' , 'phy:/dev/md2,xvdb,w' ]
vif = [ "mac=02:16:36:35:35:09, bridge=br203, vifname=vm.zeus.203", "mac=10:16:36:35:35:09, bridge=br10, vifname=vm.zeus.10" ]
bootloader = "pygrub"
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
All the tests are being done on /dev/md2 (from Dom0) presented as xvdb
on the DomU.
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid6 sdd[5] sdc[4] sdf[1] sde[0]
3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
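Since the buffered-write gap shows up on a RAID6, the array geometry is also
worth recording - writes smaller than a full stripe force read-modify-write
on RAID6, which is exactly what a small per-request segment limit encourages.
A sketch of how to capture that (standard mdadm/sysfs queries, nothing
specific to this thread):

# Chunk size and member count determine the full-stripe width that a single
# write has to reach to avoid read-modify-write on the RAID6:
mdadm --detail /dev/md2 | grep -Ei 'level|chunk|devices'
cat /sys/block/md2/md/chunk_size
cat /sys/block/md2/queue/optimal_io_size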
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-04-29 19:47 ` Steven Haigh
@ 2013-04-30 10:07 ` Felipe Franciosi
2013-04-30 10:38 ` Steven Haigh
2013-05-08 8:20 ` Steven Haigh
0 siblings, 2 replies; 25+ messages in thread
From: Felipe Franciosi @ 2013-04-30 10:07 UTC (permalink / raw)
To: 'Steven Haigh'; +Cc: Roger Pau Monne, xen-devel@lists.xen.org
I noticed you copied your results from "dd", but I didn't see any conclusions drawn from the experiment.
Did I understand it wrong, or do you now have comparable performance on dom0 and domU when using DIRECT?
domU:
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
dom0:
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
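One way to separate the cache-flush behaviour from the protocol itself - just
a suggested extra data point, not something measured in this thread - is to
add a buffered run that forces a flush at the end, so three cases can be
compared: fully buffered, buffered plus flush, and O_DIRECT:

# Buffered, but the elapsed time includes a final fdatasync, so the guest's
# page cache cannot hide the cost of pushing the data through blkfront:
dd if=/dev/zero of=output.zero bs=1M count=2048 conv=fdatasync

# The two runs already used in this thread, for comparison:
dd if=/dev/zero of=output.zero bs=1M count=2048               # buffered only
dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct  # O_DIRECT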
Cheers,
Felipe
-----Original Message-----
From: Steven Haigh [mailto:netwiz@crc.id.au]
Sent: 29 April 2013 20:48
To: Roger Pau Monne
Cc: Felipe Franciosi; xen-devel@lists.xen.org
Subject: Re: IO speed limited by size of IO request (for RBD driver)
On 30/04/2013 5:26 AM, Steven Haigh wrote:
> On 29/04/2013 6:38 PM, Roger Pau Monné wrote:
>> Did you also copy xen-blkfront?
>
> Dammit! No, no I didn't. I tried to just copy this back over to the
> 3.8.8 and 3.8.10 kernel versions, but it came up with too many errors
> - so I just rebuilt/packages the checkout of your git based on 3.8.0-rc7.
>
>> It seems you are missing some pieces, you should see something like:
>>
>> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
>> indirect descriptors: enabled;
>
> Now I'm running 3.8.0-rc7 from your git on both DomU and Dom0. In the
> DomU, I now see:
>
> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled;
>
> From what you say, this should be what I'd expect.
>
> From the DomU:
> # dd if=/dev/zero of=output.zero bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.23 0.00 9.61 0.00 0.46 89.70
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 1071.40 7914.87 67.28 479.18 4.40 32.64
> 138.82 17.45 31.65 2.00 109.36
> sde 1151.72 7943.71 68.65 486.73 4.79 33.20
> 140.10 13.18 23.87 1.93 107.14
> sdc 1123.34 7921.05 66.36 482.84 4.66 32.86
> 139.89 8.80 15.96 1.86 102.31
> sdf 1091.53 7937.30 70.02 483.30 4.54 32.97
> 138.84 18.98 34.31 1.98 109.45
> md2 0.00 0.00 0.00 1003.66 0.00 65.31
> 133.27 0.00 0.00 0.00 0.00
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.22 0.00 10.94 0.00 0.22 88.62
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 20.35 13703.72 1.75 258.64 0.10 54.54
> 429.75 0.47 1.81 1.13 29.34
> sde 1858.64 11655.36 61.05 199.56 7.51 46.36
> 423.36 1.54 5.89 3.27 85.27
> sdc 142.45 11824.07 5.47 254.70 0.59 47.18
> 376.03 0.42 1.61 1.02 26.59
> sdf 332.39 13489.72 11.38 248.80 1.35 53.72
> 433.47 1.06 4.10 2.50 65.16
> md2 0.00 0.00 3.72 733.48 0.06 91.68
> 254.86 0.00 0.00 0.00 0.00
I just thought - I should probably include a baseline by mounting the same LV in the Dom0 and doing the exact same tests.
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 19.1554 s, 112 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 23.18 76.60 0.22 0.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
avgrq-sz avgqu-sz await svctm %util
sdd 139.07 14785.43 11.92 286.98 0.59 58.88
407.50 2.60 8.71 1.84 54.92
sde 83.44 14846.58 8.39 292.05 0.36 59.09
405.23 4.12 13.69 2.56 76.84
sdc 98.23 14828.04 9.93 289.18 0.42 58.84
405.73 2.55 8.45 1.75 52.43
sdf 77.04 14816.78 8.61 289.40 0.33 58.96
407.51 3.89 13.05 2.52 75.14
md2 0.00 0.00 0.00 973.51 0.00 116.72
245.55 0.00 0.00 0.00 0.00
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 12.22 87.58 0.21 0.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
avgrq-sz avgqu-sz await svctm %util
sdd 32.09 12310.14 1.04 291.10 0.13 49.22
345.99 0.48 1.66 0.91 26.71
sde 1225.88 9931.88 39.54 224.84 4.94 39.70
345.81 1.20 4.53 2.44 64.55
sdc 19.25 11116.15 0.62 266.05 0.08 44.46
342.06 0.41 1.53 0.86 22.94
sdf 1206.63 11122.77 38.92 253.21 4.87 44.51
346.17 1.39 4.78 2.46 71.97
md2 0.00 0.00 0.00 634.37 0.00 79.30
256.00 0.00 0.00 0.00 0.00
This is running the same kernel - 3.8.0-rc7 from your git.
And also for the sake completeness, the Dom0 grub.conf:
title Scientific Linux (3.8.0-1.el6xen.x86_64)
root (hd0,0)
kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin
module /vmlinuz-3.8.0-1.el6xen.x86_64 ro root=/dev/vg_raid1/xenhost rd_LVM_LV=vg_raid1/xenhost rd_MD_UUID=afb92c19:b9b1e3ae:07af315d:738e38be rd_NO_LUKS rd_NO_DM
LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto quiet panic=5 i915.i915_enable_rc6=7
i915.i915_enable_fbc=1 i915.lvds_downclock=1 drm.vblankoffdelay=1
module /initramfs-3.8.0-1.el6xen.x86_64.img
and the DomU config:
# cat /etc/xen/zeus.vm
name = "zeus.vm"
memory = 1024
vcpus = 2
cpus = "1-3"
disk = [ 'phy:/dev/vg_raid1/zeus.vm,xvda,w' ,
'phy:/dev/md2,xvdb,w' ]
vif = [ "mac=02:16:36:35:35:09, bridge=br203,
vifname=vm.zeus.203", "mac=10:16:36:35:35:09, bridge=br10, vifname=vm.zeus.10" ]
bootloader = "pygrub"
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
All the tests are being done on /dev/md2 (from Dom0) presented as xvdb on the DomU.
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid6 sdd[5] sdc[4] sdf[1] sde[0]
3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-04-30 10:07 ` Felipe Franciosi
@ 2013-04-30 10:38 ` Steven Haigh
2013-05-08 8:20 ` Steven Haigh
1 sibling, 0 replies; 25+ messages in thread
From: Steven Haigh @ 2013-04-30 10:38 UTC (permalink / raw)
To: Felipe Franciosi; +Cc: Roger Pau Monne, xen-devel@lists.xen.org
On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>
> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>
> domU:
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>
> dom0:
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>
>
> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
Exactly right. The direct write speeds are close enough to native to be
perfect. The problem is when *not* using direct mode. You can see from
the results that when I don't pass oflag=direct to dd, the write speeds
are 112MB/sec from the Dom0 and 65MB/sec from the DomU.
As just about every other method of writing on the DomU doesn't use
direct I/O, this becomes the normal write speed.
>
> Cheers,
> Felipe
>
> -----Original Message-----
> From: Steven Haigh [mailto:netwiz@crc.id.au]
> Sent: 29 April 2013 20:48
> To: Roger Pau Monne
> Cc: Felipe Franciosi; xen-devel@lists.xen.org
> Subject: Re: IO speed limited by size of IO request (for RBD driver)
>
> On 30/04/2013 5:26 AM, Steven Haigh wrote:
>> On 29/04/2013 6:38 PM, Roger Pau Monné wrote:
>>> Did you also copy xen-blkfront?
>>
>> Dammit! No, no I didn't. I tried to just copy this back over to the
>> 3.8.8 and 3.8.10 kernel versions, but it came up with too many errors
>> - so I just rebuilt/packages the checkout of your git based on 3.8.0-rc7.
>>
>>> It seems you are missing some pieces, you should see something like:
>>>
>>> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
>>> indirect descriptors: enabled;
>>
>> Now I'm running 3.8.0-rc7 from your git on both DomU and Dom0. In the
>> DomU, I now see:
>>
>> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
>> indirect descriptors: enabled;
>>
>> From what you say, this should be what I'd expect.
>>
>> From the DomU:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s
>>
>> avg-cpu: %user %nice %system %iowait %steal %idle
>> 0.23 0.00 9.61 0.00 0.46 89.70
>>
>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
>> avgrq-sz avgqu-sz await svctm %util
>> sdd 1071.40 7914.87 67.28 479.18 4.40 32.64
>> 138.82 17.45 31.65 2.00 109.36
>> sde 1151.72 7943.71 68.65 486.73 4.79 33.20
>> 140.10 13.18 23.87 1.93 107.14
>> sdc 1123.34 7921.05 66.36 482.84 4.66 32.86
>> 139.89 8.80 15.96 1.86 102.31
>> sdf 1091.53 7937.30 70.02 483.30 4.54 32.97
>> 138.84 18.98 34.31 1.98 109.45
>> md2 0.00 0.00 0.00 1003.66 0.00 65.31
>> 133.27 0.00 0.00 0.00 0.00
>>
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>
>> avg-cpu: %user %nice %system %iowait %steal %idle
>> 0.22 0.00 10.94 0.00 0.22 88.62
>>
>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
>> avgrq-sz avgqu-sz await svctm %util
>> sdd 20.35 13703.72 1.75 258.64 0.10 54.54
>> 429.75 0.47 1.81 1.13 29.34
>> sde 1858.64 11655.36 61.05 199.56 7.51 46.36
>> 423.36 1.54 5.89 3.27 85.27
>> sdc 142.45 11824.07 5.47 254.70 0.59 47.18
>> 376.03 0.42 1.61 1.02 26.59
>> sdf 332.39 13489.72 11.38 248.80 1.35 53.72
>> 433.47 1.06 4.10 2.50 65.16
>> md2 0.00 0.00 3.72 733.48 0.06 91.68
>> 254.86 0.00 0.00 0.00 0.00
>
> I just thought - I should probably include a baseline by mounting the same LV in the Dom0 and doing the exact same tests.
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 19.1554 s, 112 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.00 0.00 23.18 76.60 0.22 0.00
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 139.07 14785.43 11.92 286.98 0.59 58.88
> 407.50 2.60 8.71 1.84 54.92
> sde 83.44 14846.58 8.39 292.05 0.36 59.09
> 405.23 4.12 13.69 2.56 76.84
> sdc 98.23 14828.04 9.93 289.18 0.42 58.84
> 405.73 2.55 8.45 1.75 52.43
> sdf 77.04 14816.78 8.61 289.40 0.33 58.96
> 407.51 3.89 13.05 2.52 75.14
> md2 0.00 0.00 0.00 973.51 0.00 116.72
> 245.55 0.00 0.00 0.00 0.00
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.00 0.00 12.22 87.58 0.21 0.00
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 32.09 12310.14 1.04 291.10 0.13 49.22
> 345.99 0.48 1.66 0.91 26.71
> sde 1225.88 9931.88 39.54 224.84 4.94 39.70
> 345.81 1.20 4.53 2.44 64.55
> sdc 19.25 11116.15 0.62 266.05 0.08 44.46
> 342.06 0.41 1.53 0.86 22.94
> sdf 1206.63 11122.77 38.92 253.21 4.87 44.51
> 346.17 1.39 4.78 2.46 71.97
> md2 0.00 0.00 0.00 634.37 0.00 79.30
> 256.00 0.00 0.00 0.00 0.00
>
> This is running the same kernel - 3.8.0-rc7 from your git.
>
> And also for the sake completeness, the Dom0 grub.conf:
> title Scientific Linux (3.8.0-1.el6xen.x86_64)
> root (hd0,0)
> kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin
> module /vmlinuz-3.8.0-1.el6xen.x86_64 ro root=/dev/vg_raid1/xenhost rd_LVM_LV=vg_raid1/xenhost rd_MD_UUID=afb92c19:b9b1e3ae:07af315d:738e38be rd_NO_LUKS rd_NO_DM
> LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto quiet panic=5 i915.i915_enable_rc6=7
> i915.i915_enable_fbc=1 i915.lvds_downclock=1 drm.vblankoffdelay=1
> module /initramfs-3.8.0-1.el6xen.x86_64.img
>
> and the DomU config:
> # cat /etc/xen/zeus.vm
> name = "zeus.vm"
> memory = 1024
> vcpus = 2
> cpus = "1-3"
> disk = [ 'phy:/dev/vg_raid1/zeus.vm,xvda,w' ,
> 'phy:/dev/md2,xvdb,w' ]
> vif = [ "mac=02:16:36:35:35:09, bridge=br203,
> vifname=vm.zeus.203", "mac=10:16:36:35:35:09, bridge=br10, vifname=vm.zeus.10" ]
> bootloader = "pygrub"
>
> on_poweroff = 'destroy'
> on_reboot = 'restart'
> on_crash = 'restart'
>
> All the tests are being done on /dev/md2 (from Dom0) presented as xvdb on the DomU.
> # cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md2 : active raid6 sdd[5] sdc[4] sdf[1] sde[0]
> 3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
>
> --
> Steven Haigh
>
> Email: netwiz@crc.id.au
> Web: https://www.crc.id.au
> Phone: (03) 9001 6090 - 0412 935 897
> Fax: (03) 8338 0299
>
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-04-30 10:07 ` Felipe Franciosi
2013-04-30 10:38 ` Steven Haigh
@ 2013-05-08 8:20 ` Steven Haigh
2013-05-08 8:33 ` Roger Pau Monné
1 sibling, 1 reply; 25+ messages in thread
From: Steven Haigh @ 2013-05-08 8:20 UTC (permalink / raw)
To: Felipe Franciosi; +Cc: Roger Pau Monne, xen-devel@lists.xen.org
On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>
> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>
> domU:
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>
> dom0:
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>
>
> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
Just wondering if there is any further input on this... While DIRECT
writes are as good as can be expected, NON-DIRECT writes in certain
cases (specifically with an mdadm RAID in the Dom0) are affected by about
a 50% loss in throughput...
The hard part is that this is the default mode of writing!
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-08 8:20 ` Steven Haigh
@ 2013-05-08 8:33 ` Roger Pau Monné
2013-05-08 8:47 ` Steven Haigh
2013-05-08 10:32 ` Steven Haigh
0 siblings, 2 replies; 25+ messages in thread
From: Roger Pau Monné @ 2013-05-08 8:33 UTC (permalink / raw)
To: Steven Haigh; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 08/05/13 10:20, Steven Haigh wrote:
> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>>
>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>>
>> domU:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>
>> dom0:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>
>>
>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
>
> Just wondering if there is any further input on this... While DIRECT
> writes are as good as can be expected, NON-DIRECT writes in certain
> cases (specifically with a mdadm raid in the Dom0) are affected by about
> a 50% loss in throughput...
>
> The hard part is that this is the default mode of writing!
As another test with indirect descriptors, could you change
xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
recompile the DomU kernel and see if that helps?
Thanks, Roger.
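A minimal sketch of the change being asked for here - it assumes the default
is assigned literally as "= 32" in the declaration in Roger's tree, so check
the actual source before applying it blindly:

# In the DomU kernel tree carrying the indirect descriptor patches, bump the
# frontend's default number of segments per indirect request:
sed -i 's/xen_blkif_max_segments = 32/xen_blkif_max_segments = 128/' \
    drivers/block/xen-blkfront.c
# Rebuild and install the DomU kernel, then reboot the guest:
make -j"$(nproc)" && make modules_install && make install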
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-08 8:33 ` Roger Pau Monné
@ 2013-05-08 8:47 ` Steven Haigh
2013-05-08 10:32 ` Steven Haigh
1 sibling, 0 replies; 25+ messages in thread
From: Steven Haigh @ 2013-05-08 8:47 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
> On 08/05/13 10:20, Steven Haigh wrote:
>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>>>
>>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>>>
>>> domU:
>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>> 2048+0 records in
>>> 2048+0 records out
>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>>
>>> dom0:
>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>> 2048+0 records in
>>> 2048+0 records out
>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>
>>>
>>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
>>
>> Just wondering if there is any further input on this... While DIRECT
>> writes are as good as can be expected, NON-DIRECT writes in certain
>> cases (specifically with a mdadm raid in the Dom0) are affected by about
>> a 50% loss in throughput...
>>
>> The hard part is that this is the default mode of writing!
>
> As another test with indirect descriptors, could you change
> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
> recompile the DomU kernel and see if that helps?
Ok, I'll get onto this...
One thing I thought I'd try - as the RAID6 is only assembled in the Dom0
then passed to the DomU as /dev/md2 - I wondered what would happen if I
passed all the member drives directly to the DomU and let the DomU take
care of the RAID6 info...
So - I changed the DomU config as such:
disk = [ 'phy:/dev/vg_raid1/zeus.vm,xvda,w' , 'phy:/dev/sdc,xvdc,w' , 'phy:/dev/sdd,xvdd,w' , 'phy:/dev/sde,xvde,w' , 'phy:/dev/sdf,xvdf,w' ]
I then assembled the RAID6 on the DomU using mdadm:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 xvdf[1] xvde[0] xvdd[5] xvdc[4]
3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
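The exact assemble command isn't shown above; with the members appearing as
xvdc-xvdf inside the guest, a typical invocation would look like this (sketch
only - an mdadm.conf entry with the array UUID would let --assemble --scan do
the same):

# Assemble the existing RAID6 from the passed-through members inside the DomU:
mdadm --assemble /dev/md127 /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf
cat /proc/mdstat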
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 35.4581 s, 60.6 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.54 0.00 11.76 0.00 0.68 87.03
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdf 0.00 0.00 16.89 2832.70 0.44 36.42 26.49 17.46 6.12 0.36 103.82
sdc 0.00 0.00 14.73 2876.49 0.39 36.36 26.03 19.57 6.77 0.38 108.50
sde 0.00 0.00 20.68 2692.70 0.50 36.40 27.85 17.97 6.62 0.40 109.07
sdd 0.00 0.00 11.76 2846.22 0.35 36.36 26.30 19.36 6.76 0.37 106.14
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 53.4774 s, 40.2 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.49 0.00 14.64 0.00 0.62 84.26
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdf 0.00 0.00 614.88 1382.90 5.08 21.85 27.61 10.12 5.07 0.39 77.70
sdc 0.00 0.00 16.73 2800.86 0.09 26.46 19.30 13.51 4.79 0.28 77.64
sde 0.00 0.00 25.95 2762.24 0.19 21.76 16.12 3.04 1.09 0.12 32.76
sdb 0.00 0.00 0.00 1.97 0.00 0.01 5.75 0.01 7.00 6.63 1.30
sdd 0.00 0.00 6.03 2831.61 0.02 26.62 19.23 14.11 5.01 0.28 80.58
Interesting that doing this destroys the direct write speed - however it
doesn't seem to affect the non-direct writes. (As a side note, this is using
the stock EL6 kernel as the DomU and vanilla 3.8.10 as the Dom0.)
Will do the other research now...
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-08 8:33 ` Roger Pau Monné
2013-05-08 8:47 ` Steven Haigh
@ 2013-05-08 10:32 ` Steven Haigh
2013-05-08 10:45 ` Roger Pau Monné
1 sibling, 1 reply; 25+ messages in thread
From: Steven Haigh @ 2013-05-08 10:32 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
> On 08/05/13 10:20, Steven Haigh wrote:
>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>>>
>>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>>>
>>> domU:
>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>> 2048+0 records in
>>> 2048+0 records out
>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>>
>>> dom0:
>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>> 2048+0 records in
>>> 2048+0 records out
>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>
>>>
>>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
>>
>> Just wondering if there is any further input on this... While DIRECT
>> writes are as good as can be expected, NON-DIRECT writes in certain
>> cases (specifically with a mdadm raid in the Dom0) are affected by about
>> a 50% loss in throughput...
>>
>> The hard part is that this is the default mode of writing!
>
> As another test with indirect descriptors, could you change
> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
> recompile the DomU kernel and see if that helps?
Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2 is
running on both the Dom0 and DomU.
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.34 0.00 17.10 0.00 0.23 82.33
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdd 980.97 11936.47 53.11 429.78 4.00 48.77 223.81 12.75 26.10 2.11 101.79
sdc 872.71 11957.87 45.98 435.67 3.55 49.30 224.71 13.77 28.43 2.11 101.49
sde 949.26 11981.88 51.30 429.33 3.91 48.90 225.03 21.29 43.91 2.27 109.08
sdf 915.52 11968.52 48.58 428.88 3.73 48.92 225.84 21.44 44.68 2.27 108.56
md2 0.00 0.00 0.00 1155.61 0.00 97.51 172.80 0.00 0.00 0.00 0.00
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s
avg-cpu: %user %nice %system %iowait %steal %idle
0.11 0.00 13.92 0.00 0.22 85.75
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdd 0.00 13986.08 0.00 263.20 0.00 55.76 433.87 0.43 1.63 1.07 28.27
sdc 202.10 13741.55 6.52 256.57 0.81 54.77 432.65 0.50 1.88 1.25 32.78
sde 47.96 11437.57 1.55 261.77 0.19 45.79 357.63 0.80 3.02 1.85 48.60
sdf 2233.37 11756.13 71.93 191.38 8.99 46.80 433.90 1.49 5.66 3.27 86.15
md2 0.00 0.00 0.00 731.93 0.00 91.49 256.00 0.00 0.00 0.00 0.00
Now this is pretty much exactly what I would expect the system to do....
~96MB/sec buffered, and 85MB/sec direct.
So - it turns out that xen_blkif_max_segments at 32 is a killer in the
DomU. Now it makes me wonder what we can do about this in kernels that
don't have your series of patches applied - and also about the backend
stuff in 3.8.x etc.
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-08 10:32 ` Steven Haigh
@ 2013-05-08 10:45 ` Roger Pau Monné
2013-05-08 11:14 ` Felipe Franciosi
2013-05-08 12:56 ` Steven Haigh
0 siblings, 2 replies; 25+ messages in thread
From: Roger Pau Monné @ 2013-05-08 10:45 UTC (permalink / raw)
To: Steven Haigh; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 08/05/13 12:32, Steven Haigh wrote:
> On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
>> On 08/05/13 10:20, Steven Haigh wrote:
>>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>>>>
>>>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>>>>
>>>> domU:
>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>> 2048+0 records in
>>>> 2048+0 records out
>>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>>>
>>>> dom0:
>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>> 2048+0 records in
>>>> 2048+0 records out
>>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>>
>>>>
>>>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
>>>
>>> Just wondering if there is any further input on this... While DIRECT
>>> writes are as good as can be expected, NON-DIRECT writes in certain
>>> cases (specifically with a mdadm raid in the Dom0) are affected by about
>>> a 50% loss in throughput...
>>>
>>> The hard part is that this is the default mode of writing!
>>
>> As another test with indirect descriptors, could you change
>> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
>> recompile the DomU kernel and see if that helps?
>
> Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2 is
> running on both the Dom0 and DomU.
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.34 0.00 17.10 0.00 0.23 82.33
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 980.97 11936.47 53.11 429.78 4.00 48.77
> 223.81 12.75 26.10 2.11 101.79
> sdc 872.71 11957.87 45.98 435.67 3.55 49.30
> 224.71 13.77 28.43 2.11 101.49
> sde 949.26 11981.88 51.30 429.33 3.91 48.90
> 225.03 21.29 43.91 2.27 109.08
> sdf 915.52 11968.52 48.58 428.88 3.73 48.92
> 225.84 21.44 44.68 2.27 108.56
> md2 0.00 0.00 0.00 1155.61 0.00 97.51
> 172.80 0.00 0.00 0.00 0.00
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.11 0.00 13.92 0.00 0.22 85.75
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 0.00 13986.08 0.00 263.20 0.00 55.76
> 433.87 0.43 1.63 1.07 28.27
> sdc 202.10 13741.55 6.52 256.57 0.81 54.77
> 432.65 0.50 1.88 1.25 32.78
> sde 47.96 11437.57 1.55 261.77 0.19 45.79
> 357.63 0.80 3.02 1.85 48.60
> sdf 2233.37 11756.13 71.93 191.38 8.99 46.80
> 433.90 1.49 5.66 3.27 86.15
> md2 0.00 0.00 0.00 731.93 0.00 91.49
> 256.00 0.00 0.00 0.00 0.00
>
> Now this is pretty much exactly what I would expect the system to do....
> ~96MB/sec buffered, and 85MB/sec direct.
I'm sorry to be such a PITA, but could you also try with 64? If we have
to increase the maximum number of indirect descriptors I would like to
set it to the lowest value that provides good performance to prevent
using too much memory.
> So - it turns out that xen_blkif_max_segments at 32 is a killer in the
> DomU. Now it makes me wonder what we can do about this in kernels that
> don't have your series of patches against it? And also about the backend
> stuff in 3.8.x etc?
There isn't much we can do regarding kernels without indirect
descriptors; there's no easy way to increase the number of segments in a
request.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-08 10:45 ` Roger Pau Monné
@ 2013-05-08 11:14 ` Felipe Franciosi
2013-05-22 20:13 ` Konrad Rzeszutek Wilk
2013-05-08 12:56 ` Steven Haigh
1 sibling, 1 reply; 25+ messages in thread
From: Felipe Franciosi @ 2013-05-08 11:14 UTC (permalink / raw)
To: Roger Pau Monne, Steven Haigh; +Cc: xen-devel@lists.xen.org
Although we didn't "prove" it properly, I think it is worth mentioning that this boils down to what we originally thought it was:
Steven's environment is writing to a filesystem in the guest. On top of that, it's using the guest's buffer cache to do the writes.
This means that we cannot (easily?) control how the cache and the fs are flushing these writes through blkfront/blkback.
In other words, it's very likely that it generates a workload that simply doesn't perform well on the "stock" PV protocol.
This is a good example of how indirect descriptors help (remember that Roger and I were struggling to find use cases where indirect descriptors showed a substantial gain).
Cheers,
Felipe
-----Original Message-----
From: Roger Pau Monne
Sent: 08 May 2013 11:45
To: Steven Haigh
Cc: Felipe Franciosi; xen-devel@lists.xen.org
Subject: Re: IO speed limited by size of IO request (for RBD driver)
On 08/05/13 12:32, Steven Haigh wrote:
> On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
>> On 08/05/13 10:20, Steven Haigh wrote:
>>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>>>>
>>>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>>>>
>>>> domU:
>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>> 2048+0 records in
>>>> 2048+0 records out
>>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>>>
>>>> dom0:
>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>> 2048+0 records in
>>>> 2048+0 records out
>>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>>
>>>>
>>>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
>>>
>>> Just wondering if there is any further input on this... While DIRECT
>>> writes are as good as can be expected, NON-DIRECT writes in certain
>>> cases (specifically with a mdadm raid in the Dom0) are affected by
>>> about a 50% loss in throughput...
>>>
>>> The hard part is that this is the default mode of writing!
>>
>> As another test with indirect descriptors, could you change
>> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by
>> default), recompile the DomU kernel and see if that helps?
>
> Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2
> is running on both the Dom0 and DomU.
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.34 0.00 17.10 0.00 0.23 82.33
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 980.97 11936.47 53.11 429.78 4.00 48.77
> 223.81 12.75 26.10 2.11 101.79
> sdc 872.71 11957.87 45.98 435.67 3.55 49.30
> 224.71 13.77 28.43 2.11 101.49
> sde 949.26 11981.88 51.30 429.33 3.91 48.90
> 225.03 21.29 43.91 2.27 109.08
> sdf 915.52 11968.52 48.58 428.88 3.73 48.92
> 225.84 21.44 44.68 2.27 108.56
> md2 0.00 0.00 0.00 1155.61 0.00 97.51
> 172.80 0.00 0.00 0.00 0.00
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.11 0.00 13.92 0.00 0.22 85.75
>
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> avgrq-sz avgqu-sz await svctm %util
> sdd 0.00 13986.08 0.00 263.20 0.00 55.76
> 433.87 0.43 1.63 1.07 28.27
> sdc 202.10 13741.55 6.52 256.57 0.81 54.77
> 432.65 0.50 1.88 1.25 32.78
> sde 47.96 11437.57 1.55 261.77 0.19 45.79
> 357.63 0.80 3.02 1.85 48.60
> sdf 2233.37 11756.13 71.93 191.38 8.99 46.80
> 433.90 1.49 5.66 3.27 86.15
> md2 0.00 0.00 0.00 731.93 0.00 91.49
> 256.00 0.00 0.00 0.00 0.00
>
> Now this is pretty much exactly what I would expect the system to do....
> ~96MB/sec buffered, and 85MB/sec direct.
I'm sorry to be such a PITA, but could you also try with 64? If we have to increase the maximum number of indirect descriptors I would like to set it to the lowest value that provides good performance to prevent using too much memory.
> So - it turns out that xen_blkif_max_segments at 32 is a killer in the
> DomU. Now it makes me wonder what we can do about this in kernels that
> don't have your series of patches against it? And also about the
> backend stuff in 3.8.x etc?
There isn't much we can do regarding kernels without indirect descriptors, there's no easy way to increase the number of segments in a request.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-08 10:45 ` Roger Pau Monné
2013-05-08 11:14 ` Felipe Franciosi
@ 2013-05-08 12:56 ` Steven Haigh
1 sibling, 0 replies; 25+ messages in thread
From: Steven Haigh @ 2013-05-08 12:56 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On 05/08/2013 08:45 PM, Roger Pau Monné wrote:
> On 08/05/13 12:32, Steven Haigh wrote:
>> On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
>>> On 08/05/13 10:20, Steven Haigh wrote:
>>>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>>>> I noticed you copied your results from "dd", but I didn't see any
conclusions drawn from experiment.
>>>>>
>>>>> Did I understand it wrong or now you have comparable performance
on dom0 and domU when using DIRECT?
>>>>>
>>>>> domU:
>>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>>> 2048+0 records in
>>>>> 2048+0 records out
>>>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>>>>
>>>>> dom0:
>>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>>> 2048+0 records in
>>>>> 2048+0 records out
>>>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>>>
>>>>>
>>>>> I think that if the performance differs when NOT using DIRECT,
the issue must be related to the way your guest is flushing the cache.
This must be generating a workload that doesn't perform well on Xen's PV
protocol.
>>>>
>>>> Just wondering if there is any further input on this... While DIRECT
>>>> writes are as good as can be expected, NON-DIRECT writes in certain
>>>> cases (specifically with a mdadm raid in the Dom0) are affected by
about
>>>> a 50% loss in throughput...
>>>>
>>>> The hard part is that this is the default mode of writing!
>>>
>>> As another test with indirect descriptors, could you change
>>> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
>>> recompile the DomU kernel and see if that helps?
>>
>> Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2 is
>> running on both the Dom0 and DomU.
>>
>> # dd if=/dev/zero of=output.zero bs=1M count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s
>>
>> avg-cpu: %user %nice %system %iowait %steal %idle
>> 0.34 0.00 17.10 0.00 0.23 82.33
>>
>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
>> avgrq-sz avgqu-sz await svctm %util
>> sdd 980.97 11936.47 53.11 429.78 4.00 48.77
>> 223.81 12.75 26.10 2.11 101.79
>> sdc 872.71 11957.87 45.98 435.67 3.55 49.30
>> 224.71 13.77 28.43 2.11 101.49
>> sde 949.26 11981.88 51.30 429.33 3.91 48.90
>> 225.03 21.29 43.91 2.27 109.08
>> sdf 915.52 11968.52 48.58 428.88 3.73 48.92
>> 225.84 21.44 44.68 2.27 108.56
>> md2 0.00 0.00 0.00 1155.61 0.00 97.51
>> 172.80 0.00 0.00 0.00 0.00
>>
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s
>>
>> avg-cpu: %user %nice %system %iowait %steal %idle
>> 0.11 0.00 13.92 0.00 0.22 85.75
>>
>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
>> avgrq-sz avgqu-sz await svctm %util
>> sdd 0.00 13986.08 0.00 263.20 0.00 55.76
>> 433.87 0.43 1.63 1.07 28.27
>> sdc 202.10 13741.55 6.52 256.57 0.81 54.77
>> 432.65 0.50 1.88 1.25 32.78
>> sde 47.96 11437.57 1.55 261.77 0.19 45.79
>> 357.63 0.80 3.02 1.85 48.60
>> sdf 2233.37 11756.13 71.93 191.38 8.99 46.80
>> 433.90 1.49 5.66 3.27 86.15
>> md2 0.00 0.00 0.00 731.93 0.00 91.49
>> 256.00 0.00 0.00 0.00 0.00
>>
>> Now this is pretty much exactly what I would expect the system to do....
>> ~96MB/sec buffered, and 85MB/sec direct.
>
> I'm sorry to be such a PITA, but could you also try with 64? If we have
> to increase the maximum number of indirect descriptors I would like to
> set it to the lowest value that provides good performance to prevent
> using too much memory.
Compiled with 64:
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.2078 s, 85.2 MB/s
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 22.0265 s, 97.5 MB/s
>> So - it turns out that xen_blkif_max_segments at 32 is a killer in the
>> DomU. Now it makes me wonder what we can do about this in kernels that
>> don't have your series of patches against it? And also about the backend
>> stuff in 3.8.x etc?
>
> There isn't much we can do regarding kernels without indirect
> descriptors, there's no easy way to increase the number of segments in a
> request.
I wonder if this is something that could go into vanilla kernel 3.9 -
then maybe we can get the vendors (RH etc) to backport this into their
EL6 kernels... I'm happy to hassle the vendors if we can move forward
on getting the newer indirect stuff in there.
As far as I'm concerned it's worth its weight in gold.
--
Steven Haigh
Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-08 11:14 ` Felipe Franciosi
@ 2013-05-22 20:13 ` Konrad Rzeszutek Wilk
2013-05-23 7:22 ` Felipe Franciosi
0 siblings, 1 reply; 25+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-05-22 20:13 UTC (permalink / raw)
To: Felipe Franciosi; +Cc: xen-devel@lists.xen.org, Steven Haigh, Roger Pau Monne
On Wed, May 08, 2013 at 11:14:26AM +0000, Felipe Franciosi wrote:
> However we didn't "prove" it properly, I think it is worth mentioning that this boils down to what we originally thought it was:
> Steven's environment is writing to a filesystem in the guest. On top of that, it's using the guest's buffer cache to do the writes.
If he is using O_DIRECT it bypasses the cache in the guest.
> This means that we cannot (easily?) control how the cache and the fs are flushing these writes through blkfront/blkback.
>
> In other words, it's very likely that it generates a workload that simply doesn't perform well on the "stock" PV protocol.
> This is a good example of how indirect descriptors help (remembering Roger and I were struggling to find use cases where indirect descriptors showed a substantial gain).
>
> Cheers,
> Felipe
>
> -----Original Message-----
> From: Roger Pau Monne
> Sent: 08 May 2013 11:45
> To: Steven Haigh
> Cc: Felipe Franciosi; xen-devel@lists.xen.org
> Subject: Re: IO speed limited by size of IO request (for RBD driver)
>
> On 08/05/13 12:32, Steven Haigh wrote:
> > On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
> >> On 08/05/13 10:20, Steven Haigh wrote:
> >>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
> >>>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
> >>>>
> >>>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
> >>>>
> >>>> domU:
> >>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> >>>> 2048+0 records in
> >>>> 2048+0 records out
> >>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
> >>>>
> >>>> dom0:
> >>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> >>>> 2048+0 records in
> >>>> 2048+0 records out
> >>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
> >>>>
> >>>>
> >>>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
> >>>
> >>> Just wondering if there is any further input on this... While DIRECT
> >>> writes are as good as can be expected, NON-DIRECT writes in certain
> >>> cases (specifically with a mdadm raid in the Dom0) are affected by
> >>> about a 50% loss in throughput...
> >>>
> >>> The hard part is that this is the default mode of writing!
> >>
> >> As another test with indirect descriptors, could you change
> >> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by
> >> default), recompile the DomU kernel and see if that helps?
> >
> > Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2
> > is running on both the Dom0 and DomU.
> >
> > # dd if=/dev/zero of=output.zero bs=1M count=2048
> > 2048+0 records in
> > 2048+0 records out
> > 2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s
> >
> > avg-cpu: %user %nice %system %iowait %steal %idle
> > 0.34 0.00 17.10 0.00 0.23 82.33
> >
> > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> > avgrq-sz avgqu-sz await svctm %util
> > sdd 980.97 11936.47 53.11 429.78 4.00 48.77
> > 223.81 12.75 26.10 2.11 101.79
> > sdc 872.71 11957.87 45.98 435.67 3.55 49.30
> > 224.71 13.77 28.43 2.11 101.49
> > sde 949.26 11981.88 51.30 429.33 3.91 48.90
> > 225.03 21.29 43.91 2.27 109.08
> > sdf 915.52 11968.52 48.58 428.88 3.73 48.92
> > 225.84 21.44 44.68 2.27 108.56
> > md2 0.00 0.00 0.00 1155.61 0.00 97.51
> > 172.80 0.00 0.00 0.00 0.00
> >
> > # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> > 2048+0 records in
> > 2048+0 records out
> > 2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s
> >
> > avg-cpu: %user %nice %system %iowait %steal %idle
> > 0.11 0.00 13.92 0.00 0.22 85.75
> >
> > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> > avgrq-sz avgqu-sz await svctm %util
> > sdd 0.00 13986.08 0.00 263.20 0.00 55.76
> > 433.87 0.43 1.63 1.07 28.27
> > sdc 202.10 13741.55 6.52 256.57 0.81 54.77
> > 432.65 0.50 1.88 1.25 32.78
> > sde 47.96 11437.57 1.55 261.77 0.19 45.79
> > 357.63 0.80 3.02 1.85 48.60
> > sdf 2233.37 11756.13 71.93 191.38 8.99 46.80
> > 433.90 1.49 5.66 3.27 86.15
> > md2 0.00 0.00 0.00 731.93 0.00 91.49
> > 256.00 0.00 0.00 0.00 0.00
> >
> > Now this is pretty much exactly what I would expect the system to do....
> > ~96MB/sec buffered, and 85MB/sec direct.
>
> I'm sorry to be such a PITA, but could you also try with 64? If we have to increase the maximum number of indirect descriptors I would like to set it to the lowest value that provides good performance to prevent using too much memory.
>
> > So - it turns out that xen_blkif_max_segments at 32 is a killer in the
> > DomU. Now it makes me wonder what we can do about this in kernels that
> > don't have your series of patches against it? And also about the
> > backend stuff in 3.8.x etc?
>
> There isn't much we can do regarding kernels without indirect descriptors, there's no easy way to increase the number of segments in a request.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-22 20:13 ` Konrad Rzeszutek Wilk
@ 2013-05-23 7:22 ` Felipe Franciosi
2013-05-24 14:29 ` Konrad Rzeszutek Wilk
0 siblings, 1 reply; 25+ messages in thread
From: Felipe Franciosi @ 2013-05-23 7:22 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk
Cc: xen-devel@lists.xen.org, Steven Haigh, Roger Pau Monne
On 22 May 2013, at 21:13, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com> wrote:
> On Wed, May 08, 2013 at 11:14:26AM +0000, Felipe Franciosi wrote:
>> However we didn't "prove" it properly, I think it is worth mentioning that this boils down to what we originally thought it was:
>> Steven's environment is writing to a filesystem in the guest. On top of that, it's using the guest's buffer cache to do the writes.
>
> If he is using O_DIRECT it bypasses the cache in the guest.
Certainly, but the issues were when _not_ using O_DIRECT.
F
>
>> This means that we cannot (easily?) control how the cache and the fs are flushing these writes through blkfront/blkback.
>>
>> In other words, it's very likely that it generates a workload that simply doesn't perform well on the "stock" PV protocol.
>> This is a good example of how indirect descriptors help (remembering Roger and I were struggling to find use cases where indirect descriptors showed a substantial gain).
>>
>> Cheers,
>> Felipe
>>
>> -----Original Message-----
>> From: Roger Pau Monne
>> Sent: 08 May 2013 11:45
>> To: Steven Haigh
>> Cc: Felipe Franciosi; xen-devel@lists.xen.org
>> Subject: Re: IO speed limited by size of IO request (for RBD driver)
>>
>> On 08/05/13 12:32, Steven Haigh wrote:
>>> On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
>>>> On 08/05/13 10:20, Steven Haigh wrote:
>>>>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>>>>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>>>>>>
>>>>>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>>>>>>
>>>>>> domU:
>>>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>>>> 2048+0 records in
>>>>>> 2048+0 records out
>>>>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>>>>>
>>>>>> dom0:
>>>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>>>> 2048+0 records in
>>>>>> 2048+0 records out
>>>>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>>>>
>>>>>>
>>>>>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
>>>>>
>>>>> Just wondering if there is any further input on this... While DIRECT
>>>>> writes are as good as can be expected, NON-DIRECT writes in certain
>>>>> cases (specifically with a mdadm raid in the Dom0) are affected by
>>>>> about a 50% loss in throughput...
>>>>>
>>>>> The hard part is that this is the default mode of writing!
>>>>
>>>> As another test with indirect descriptors, could you change
>>>> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by
>>>> default), recompile the DomU kernel and see if that helps?
>>>
>>> Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2
>>> is running on both the Dom0 and DomU.
>>>
>>> # dd if=/dev/zero of=output.zero bs=1M count=2048
>>> 2048+0 records in
>>> 2048+0 records out
>>> 2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s
>>>
>>> avg-cpu: %user %nice %system %iowait %steal %idle
>>> 0.34 0.00 17.10 0.00 0.23 82.33
>>>
>>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
>>> avgrq-sz avgqu-sz await svctm %util
>>> sdd 980.97 11936.47 53.11 429.78 4.00 48.77
>>> 223.81 12.75 26.10 2.11 101.79
>>> sdc 872.71 11957.87 45.98 435.67 3.55 49.30
>>> 224.71 13.77 28.43 2.11 101.49
>>> sde 949.26 11981.88 51.30 429.33 3.91 48.90
>>> 225.03 21.29 43.91 2.27 109.08
>>> sdf 915.52 11968.52 48.58 428.88 3.73 48.92
>>> 225.84 21.44 44.68 2.27 108.56
>>> md2 0.00 0.00 0.00 1155.61 0.00 97.51
>>> 172.80 0.00 0.00 0.00 0.00
>>>
>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>> 2048+0 records in
>>> 2048+0 records out
>>> 2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s
>>>
>>> avg-cpu: %user %nice %system %iowait %steal %idle
>>> 0.11 0.00 13.92 0.00 0.22 85.75
>>>
>>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
>>> avgrq-sz avgqu-sz await svctm %util
>>> sdd 0.00 13986.08 0.00 263.20 0.00 55.76
>>> 433.87 0.43 1.63 1.07 28.27
>>> sdc 202.10 13741.55 6.52 256.57 0.81 54.77
>>> 432.65 0.50 1.88 1.25 32.78
>>> sde 47.96 11437.57 1.55 261.77 0.19 45.79
>>> 357.63 0.80 3.02 1.85 48.60
>>> sdf 2233.37 11756.13 71.93 191.38 8.99 46.80
>>> 433.90 1.49 5.66 3.27 86.15
>>> md2 0.00 0.00 0.00 731.93 0.00 91.49
>>> 256.00 0.00 0.00 0.00 0.00
>>>
>>> Now this is pretty much exactly what I would expect the system to do....
>>> ~96MB/sec buffered, and 85MB/sec direct.
>>
>> I'm sorry to be such a PITA, but could you also try with 64? If we have to increase the maximum number of indirect descriptors I would like to set it to the lowest value that provides good performance to prevent using too much memory.
>>
>>> So - it turns out that xen_blkif_max_segments at 32 is a killer in the
>>> DomU. Now it makes me wonder what we can do about this in kernels that
>>> don't have your series of patches against it? And also about the
>>> backend stuff in 3.8.x etc?
>>
>> There isn't much we can do regarding kernels without indirect descriptors, there's no easy way to increase the number of segments in a request.
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: IO speed limited by size of IO request (for RBD driver)
2013-05-23 7:22 ` Felipe Franciosi
@ 2013-05-24 14:29 ` Konrad Rzeszutek Wilk
0 siblings, 0 replies; 25+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-05-24 14:29 UTC (permalink / raw)
To: Felipe Franciosi; +Cc: Roger Pau Monne, Steven Haigh, xen-devel@lists.xen.org
On Thu, May 23, 2013 at 07:22:27AM +0000, Felipe Franciosi wrote:
>
>
> On 22 May 2013, at 21:13, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com> wrote:
>
> > On Wed, May 08, 2013 at 11:14:26AM +0000, Felipe Franciosi wrote:
> >> However we didn't "prove" it properly, I think it is worth mentioning that this boils down to what we originally thought it was:
> >> Steven's environment is writing to a filesystem in the guest. On top of that, it's using the guest's buffer cache to do the writes.
> >
> > If he is using O_DIRECT it bypasses the cache in the guest.
>
> Certainly, but the issues were when _not_ using O_DIRECT.
I am confused. Is the feature-indirect-descriptor making it worse or better when
!O_DIRECT?
Or is there no difference when using !O_DIRECT with the feature-indirect-descriptor?
>
> F
>
>
> >
> >> This means that we cannot (easily?) control how the cache and the fs are flushing these writes through blkfront/blkback.
echo 3 > /proc/..something/drop_cache
does it?
> >>
> >> In other words, it's very likely that it generates a workload that simply doesn't perform well on the "stock" PV protocol.
'fio' is an excellent tool to run the tests without using the cache.
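As a rough sketch of such a job (every parameter below is illustrative, not taken from this thread):
# 2 GiB sequential write straight to disk, bypassing the guest page cache
fio --name=seqwrite --filename=output.fio --rw=write --bs=1M --size=2G \
    --direct=1 --ioengine=libaio --iodepth=8
Sweeping bs and iodepth is also a quick way to see where the request-size limit starts to hurt.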
> >> This is a good example of how indirect descriptors help (remembering Roger and I were struggling to find use cases where indirect descriptors showed a substantial gain).
You mean using O_DIRECT? Yes, all tests that involve any I/O should use O_DIRECT.
Otherwise they are misleading. And my understanding from this thread is that Steven did that
and found that:
a) without the feature-indirect-descriptor - the I/O was sucky
b) with the initial feature-indirect-descriptor - the I/O was less sucky
c) with the feature-indirect-descriptor and a tweak to the frontend of how many
segments to use - the I/O was the same as on bare metal.
Sorry about being so verbose here - I feel that I am missing something and
I am not exactly sure what it is. Could you please enlighten me?
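For checking which of those cases a given guest actually hits, one rough sketch is to look at what the backend advertises in xenstore (the exact key name depends on which version of the indirect-descriptor series is applied, and domid/devid below are placeholders):
# run on dom0: dump the backend nodes for the guest's vbd and look for
# an indirect-descriptor feature key
xenstore-ls /local/domain/0/backend/vbd/<domid>/<devid> | grep -i indirect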
> >>
> >> Cheers,
> >> Felipe
> >>
> >> -----Original Message-----
> >> From: Roger Pau Monne
> >> Sent: 08 May 2013 11:45
> >> To: Steven Haigh
> >> Cc: Felipe Franciosi; xen-devel@lists.xen.org
> >> Subject: Re: IO speed limited by size of IO request (for RBD driver)
> >>
> >> On 08/05/13 12:32, Steven Haigh wrote:
> >>> On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
> >>>> On 08/05/13 10:20, Steven Haigh wrote:
> >>>>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
> >>>>>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
> >>>>>>
> >>>>>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
> >>>>>>
> >>>>>> domU:
> >>>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> >>>>>> 2048+0 records in
> >>>>>> 2048+0 records out
> >>>>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
> >>>>>>
> >>>>>> dom0:
> >>>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> >>>>>> 2048+0 records in
> >>>>>> 2048+0 records out
> >>>>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
> >>>>>>
> >>>>>>
> >>>>>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
> >>>>>
> >>>>> Just wondering if there is any further input on this... While DIRECT
> >>>>> writes are as good as can be expected, NON-DIRECT writes in certain
> >>>>> cases (specifically with a mdadm raid in the Dom0) are affected by
> >>>>> about a 50% loss in throughput...
> >>>>>
> >>>>> The hard part is that this is the default mode of writing!
> >>>>
> >>>> As another test with indirect descriptors, could you change
> >>>> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by
> >>>> default), recompile the DomU kernel and see if that helps?
> >>>
> >>> Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2
> >>> is running on both the Dom0 and DomU.
> >>>
> >>> # dd if=/dev/zero of=output.zero bs=1M count=2048
> >>> 2048+0 records in
> >>> 2048+0 records out
> >>> 2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s
> >>>
> >>> avg-cpu: %user %nice %system %iowait %steal %idle
> >>> 0.34 0.00 17.10 0.00 0.23 82.33
> >>>
> >>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> >>> avgrq-sz avgqu-sz await svctm %util
> >>> sdd 980.97 11936.47 53.11 429.78 4.00 48.77
> >>> 223.81 12.75 26.10 2.11 101.79
> >>> sdc 872.71 11957.87 45.98 435.67 3.55 49.30
> >>> 224.71 13.77 28.43 2.11 101.49
> >>> sde 949.26 11981.88 51.30 429.33 3.91 48.90
> >>> 225.03 21.29 43.91 2.27 109.08
> >>> sdf 915.52 11968.52 48.58 428.88 3.73 48.92
> >>> 225.84 21.44 44.68 2.27 108.56
> >>> md2 0.00 0.00 0.00 1155.61 0.00 97.51
> >>> 172.80 0.00 0.00 0.00 0.00
> >>>
> >>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> >>> 2048+0 records in
> >>> 2048+0 records out
> >>> 2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s
> >>>
> >>> avg-cpu: %user %nice %system %iowait %steal %idle
> >>> 0.11 0.00 13.92 0.00 0.22 85.75
> >>>
> >>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s
> >>> avgrq-sz avgqu-sz await svctm %util
> >>> sdd 0.00 13986.08 0.00 263.20 0.00 55.76
> >>> 433.87 0.43 1.63 1.07 28.27
> >>> sdc 202.10 13741.55 6.52 256.57 0.81 54.77
> >>> 432.65 0.50 1.88 1.25 32.78
> >>> sde 47.96 11437.57 1.55 261.77 0.19 45.79
> >>> 357.63 0.80 3.02 1.85 48.60
> >>> sdf 2233.37 11756.13 71.93 191.38 8.99 46.80
> >>> 433.90 1.49 5.66 3.27 86.15
> >>> md2 0.00 0.00 0.00 731.93 0.00 91.49
> >>> 256.00 0.00 0.00 0.00 0.00
> >>>
> >>> Now this is pretty much exactly what I would expect the system to do....
> >>> ~96MB/sec buffered, and 85MB/sec direct.
> >>
> >> I'm sorry to be such a PITA, but could you also try with 64? If we have to increase the maximum number of indirect descriptors I would like to set it to the lowest value that provides good performance to prevent using too much memory.
> >>
> >>> So - it turns out that xen_blkif_max_segments at 32 is a killer in the
> >>> DomU. Now it makes me wonder what we can do about this in kernels that
> >>> don't have your series of patches against it? And also about the
> >>> backend stuff in 3.8.x etc?
> >>
> >> There isn't much we can do regarding kernels without indirect descriptors, there's no easy way to increase the number of segments in a request.
> >>
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org
> >> http://lists.xen.org/xen-devel
> >>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
^ permalink raw reply [flat|nested] 25+ messages in thread
end of thread, other threads: [~2013-05-24 14:29 UTC | newest]
Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-04-23 13:33 IO speed limited by size of IO request (for RBD driver) Sylvain Munaut
2013-04-23 13:41 ` Steven Haigh
2013-04-23 14:06 ` Roger Pau Monné
2013-04-23 14:15 ` Sylvain Munaut
2013-04-25 13:00 ` Sylvain Munaut
[not found] ` <51769B9D.4000708@crc.id.au>
[not found] ` <51769CFD.7020907@citrix.com>
[not found] ` <51769E1E.6040902@crc.id.au>
[not found] ` <5176A19A.2010802@citrix.com>
[not found] ` <5176A440.8040303@crc.id.au>
[not found] ` <5176A520.5030503@citrix.com>
[not found] ` <5176A61F.6050607@crc.id.au>
[not found] ` <5176A6DD.5000404@citrix.com>
[not found] ` <5176AFF9.4020003@crc.id.au>
[not found] ` <5176B237.8020803@citrix.com>
[not found] ` <5176C073.3050409@crc.id.au>
[not found] ` <5176CF56.8000505@citrix.com>
[not found] ` <5176DB88.1070200@crc.id.au>
[not found] ` <517A89DA.3030804@citrix.com>
2013-04-26 14:16 ` Steven Haigh
2013-04-27 1:57 ` Steven Haigh
2013-04-27 7:06 ` Roger Pau Monné
2013-04-27 7:51 ` Steven Haigh
2013-04-27 8:35 ` Steven Haigh
2013-04-29 8:38 ` Roger Pau Monné
2013-04-29 19:26 ` Steven Haigh
2013-04-29 19:47 ` Steven Haigh
2013-04-30 10:07 ` Felipe Franciosi
2013-04-30 10:38 ` Steven Haigh
2013-05-08 8:20 ` Steven Haigh
2013-05-08 8:33 ` Roger Pau Monné
2013-05-08 8:47 ` Steven Haigh
2013-05-08 10:32 ` Steven Haigh
2013-05-08 10:45 ` Roger Pau Monné
2013-05-08 11:14 ` Felipe Franciosi
2013-05-22 20:13 ` Konrad Rzeszutek Wilk
2013-05-23 7:22 ` Felipe Franciosi
2013-05-24 14:29 ` Konrad Rzeszutek Wilk
2013-05-08 12:56 ` Steven Haigh