Date: Fri, 17 Sep 2010 12:24:49 +0200
From: Kevin Wolf
To: Laurent Vivier
Cc: qemu-devel
Subject: Re: [Qemu-devel] [PATCH] Improve qemu-nbd performance by 4400 %
Message-ID: <4C9341F1.2090406@redhat.com>
In-Reply-To: <1284663278-7487-1-git-send-email-laurent@vivier.eu>
References: <1284663278-7487-1-git-send-email-laurent@vivier.eu>
List-Id: qemu-devel.nongnu.org

On 16.09.2010 20:54, Laurent Vivier wrote:
> This patch allows to reduce the boot time from an NBD server from 225 seconds to
> 5 seconds (time between the "boot cd:0" and the kernel init) for the
> following command lines:
>
> ./qemu-nbd -t ../ISO/debian-500-powerpc-netinst.iso
> and
> ./ppc-softmmu/qemu-system-ppc -cdrom nbd:localhost:1024
>
> Signed-off-by: Laurent Vivier

I agree with Stefan. It's good to have a description of the results in
the commit message, but describing what has actually changed from a
technical perspective would be helpful, too.

> ---
>  nbd.c |   20 +++++++++++++++-----
>  1 files changed, 15 insertions(+), 5 deletions(-)
>
> diff --git a/nbd.c b/nbd.c
> index 011b50f..5d7c758 100644
> --- a/nbd.c
> +++ b/nbd.c
> @@ -655,7 +655,7 @@ int nbd_trip(BlockDriverState *bs, int csock, off_t size, uint64_t dev_offset,
>      if (nbd_receive_request(csock, &request) == -1)
>          return -1;
>
> -    if (request.len > data_size) {
> +    if (request.len + sizeof(struct nbd_reply) > data_size) {
>          LOG("len (%u) is larger than max len (%u)",
>              request.len, data_size);
>          errno = EINVAL;
> @@ -687,7 +687,8 @@ int nbd_trip(BlockDriverState *bs, int csock, off_t size, uint64_t dev_offset,
>      case NBD_CMD_READ:
>          TRACE("Request type is READ");
>
> -        if (bdrv_read(bs, (request.from + dev_offset) / 512, data,
> +        if (bdrv_read(bs, (request.from + dev_offset) / 512,
> +                      data + sizeof(struct nbd_reply),
>                        request.len / 512) == -1) {
>              LOG("reading from file failed");
>              errno = EINVAL;
> @@ -697,12 +698,21 @@ int nbd_trip(BlockDriverState *bs, int csock, off_t size, uint64_t dev_offset,
>
>          TRACE("Read %u byte(s)", request.len);
>
> -        if (nbd_send_reply(csock, &reply) == -1)
> -            return -1;
> +        /* Reply
> +           [ 0 ..  3]    magic   (NBD_REPLY_MAGIC)
> +           [ 4 ..  7]    error   (0 == no error)
> +           [ 7 .. 15]    handle
> +        */
> +
> +        cpu_to_be32w((uint32_t*)data, NBD_REPLY_MAGIC);
> +        cpu_to_be32w((uint32_t*)(data + 4), reply.error);
> +        cpu_to_be64w((uint64_t*)(data + 8), reply.handle);

Hm, if I understand this right, you rely on the compiler padding out
structs here. You reserved sizeof(struct nbd_reply) bytes and the
struct is defined like this:

struct nbd_reply {
    uint32_t error;
    uint64_t handle;
};

So isn't it pure luck that the compiler does the right thing and gives
you 16 bytes? If you want to use the struct for this, you should add a
uint32_t magic to it and make it packed.

Kevin
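
A minimal, self-contained sketch of the packed struct Kevin is asking
for follows; the type name nbd_reply_wire and the compile-time size
check are illustrative assumptions, not code taken from QEMU's nbd.c:

#include <stdint.h>

/* Hypothetical packed wire layout for the NBD simple reply header:
 * 4-byte magic, 4-byte error, 8-byte handle, 16 bytes in total.
 * Packing removes the padding Kevin points out, so sizeof() is 16
 * by construction rather than by compiler luck. */
struct nbd_reply_wire {
    uint32_t magic;
    uint32_t error;
    uint64_t handle;
} __attribute__((packed));

/* Fails to compile if the struct is not exactly 16 bytes. */
typedef char nbd_reply_wire_size_check[sizeof(struct nbd_reply_wire) == 16 ? 1 : -1];

Whether the reply is then written through the struct fields or with the
explicit cpu_to_be32w()/cpu_to_be64w() stores from the patch, the
reserved space (request.len + sizeof(struct nbd_reply_wire)) and the
byte offsets 0/4/8 can no longer drift apart silently.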