From: Laurent Vivier
Subject: Re: [Qemu-devel] [PATCH] Improve qemu-nbd performance by 4400 %
Date: Fri, 17 Sep 2010 14:58:04 +0200
To: kwolf@redhat.com
Cc: qemu-devel@nongnu.org

> On 16.09.2010 20:54, Laurent Vivier wrote:
>> This patch reduces the boot time from an NBD server from 225 seconds to
>> 5 seconds (time between the "boot cd:0" and the kernel init) for the
>> following command lines:
>>
>> ./qemu-nbd -t ../ISO/debian-500-powerpc-netinst.iso
>> and
>> ./ppc-softmmu/qemu-system-ppc -cdrom nbd:localhost:1024
>>
>> Signed-off-by: Laurent Vivier
>
> I agree with Stefan. It's good to have a description of the results in
> the commit message, but describing what has actually changed from a
> technical perspective would be helpful, too.

OK.

>> ---
>>  nbd.c |   20 +++++++++++++++-----
>>  1 files changed, 15 insertions(+), 5 deletions(-)
>>
>> diff --git a/nbd.c b/nbd.c
>> index 011b50f..5d7c758 100644
>> --- a/nbd.c
>> +++ b/nbd.c
>> @@ -655,7 +655,7 @@ int nbd_trip(BlockDriverState *bs, int csock, off_t size, uint64_t dev_offset,
>>         if (nbd_receive_request(csock, &request) == -1)
>>                 return -1;
>>
>> -       if (request.len > data_size) {
>> +       if (request.len + sizeof(struct nbd_reply) > data_size) {
>>                 LOG("len (%u) is larger than max len (%u)",
>>                     request.len, data_size);
>>                 errno = EINVAL;
>> @@ -687,7 +687,8 @@ int nbd_trip(BlockDriverState *bs, int csock, off_t size, uint64_t dev_offset,
>>         case NBD_CMD_READ:
>>                 TRACE("Request type is READ");
>>
>> -               if (bdrv_read(bs, (request.from + dev_offset) / 512, data,
>> +               if (bdrv_read(bs, (request.from + dev_offset) / 512,
>> +                             data + sizeof(struct nbd_reply),
>>                               request.len / 512) == -1) {
>>                         LOG("reading from file failed");
>>                         errno = EINVAL;
>> @@ -697,12 +698,21 @@ int nbd_trip(BlockDriverState *bs, int csock, off_t size, uint64_t dev_offset,
>>
>>                 TRACE("Read %u byte(s)", request.len);
>>
>> -               if (nbd_send_reply(csock, &reply) == -1)
>> -                       return -1;
>> +               /* Reply
>> +                  [ 0 ..  3]   magic    (NBD_REPLY_MAGIC)
>> +                  [ 4 ..  7]   error    (0 == no error)
>> +                  [ 8 .. 15]   handle
>> +                */
>> +
>> +               cpu_to_be32w((uint32_t*)data, NBD_REPLY_MAGIC);
>> +               cpu_to_be32w((uint32_t*)(data + 4), reply.error);
>> +               cpu_to_be64w((uint64_t*)(data + 8), reply.handle);
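
(Aside, for readers following the thread: the hunks above reserve the first
16 bytes of the data buffer for the reply header and fill it in by hand with
QEMU's big-endian store helpers, presumably so that header and payload can
then go out in a single write instead of two separate sends. Below is a
standalone sketch of that idea; the names put_be32(), put_be64() and
send_reply_and_payload() are illustrative and not from nbd.c.)

    #include <stdint.h>
    #include <unistd.h>

    #define NBD_REPLY_MAGIC 0x67446698   /* NBD protocol reply magic */
    #define NBD_REPLY_SIZE  16           /* 4 magic + 4 error + 8 handle */

    /* Store 32/64-bit values big-endian, byte by byte (host-order agnostic). */
    static void put_be32(uint8_t *p, uint32_t v)
    {
        p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
    }

    static void put_be64(uint8_t *p, uint64_t v)
    {
        put_be32(p, v >> 32);
        put_be32(p + 4, (uint32_t)v);
    }

    /* 'buf' already holds the payload starting at buf + NBD_REPLY_SIZE,
     * e.g. filled by reading the requested blocks. */
    static ssize_t send_reply_and_payload(int csock, uint8_t *buf,
                                          uint32_t error, uint64_t handle,
                                          size_t payload_len)
    {
        put_be32(buf,     NBD_REPLY_MAGIC);  /* [ 0 ..  3] magic  */
        put_be32(buf + 4, error);            /* [ 4 ..  7] error  */
        put_be64(buf + 8, handle);           /* [ 8 .. 15] handle */

        /* One write covers header + payload instead of two separate sends. */
        return write(csock, buf, NBD_REPLY_SIZE + payload_len);
    }

(The sketch ignores short writes; the point is only that the reply header and
the data share one contiguous buffer.)
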
>
> Hm, if I understand this right, you rely on the compiler padding out
> structs here. You reserved sizeof(struct nbd_reply) bytes and the struct
> is defined like this:
>
> struct nbd_reply {
>     uint32_t error;
>     uint64_t handle;
> };
>
> So isn't it pure luck that the compiler does the right thing and gives
> you 16 bytes? If you want to use the struct for this, you should add a
> uint32_t magic to it and make it packed.

Yes, it's pure luck. I will add an NBD_REPLY_SIZE define set to 16 and
replace the sizeof() with it.

Regards,
Laurent

-- 
---------------------
Laurent@vivier.eu
---------------------
"Tout ce qui est impossible reste à accomplir" Jules Verne
"Things are only impossible until they're not" Jean-Luc Picard
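
P.S. For reference, the two options discussed above would look roughly like
this. This is only an illustrative sketch, not necessarily what was finally
committed; __attribute__((packed)) is GCC/clang syntax.

    #include <stdint.h>

    /* Option 1 (Laurent): keep serializing by hand, but replace
     * sizeof(struct nbd_reply) with an explicit wire-format size. */
    #define NBD_REPLY_SIZE 16   /* 4 magic + 4 error + 8 handle on the wire */

    /* Option 2 (Kevin): put the magic into the struct and pack it, so that
     * sizeof() is guaranteed to be the 16-byte wire format. */
    struct nbd_reply_wire {
        uint32_t magic;
        uint32_t error;
        uint64_t handle;
    } __attribute__((packed));

    /* Without packing, the current struct (uint32_t error + uint64_t handle)
     * is 16 bytes only on ABIs that align uint64_t to 8 bytes (it would be
     * 12 where the alignment is 4) -- the "pure luck" mentioned above. */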