From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Jun 2013 15:14:15 +0200
From: Kevin Wolf
To: Peter Lieven
Cc: pbonzini@redhat.com, ronniesahlberg@gmail.com, qemu-devel@nongnu.org, Stefan Hajnoczi
Message-ID: <20130621131415.GG2986@dhcp-200-207.str.redhat.com>
References: <1371752409-16313-1-git-send-email-pl@kamp.de> <1371752409-16313-2-git-send-email-pl@kamp.de> <20130621091842.GB2986@dhcp-200-207.str.redhat.com> <51C420C4.6040503@kamp.de> <20130621110707.GF2986@dhcp-200-207.str.redhat.com> <51C43C17.3040103@kamp.de>
In-Reply-To: <51C43C17.3040103@kamp.de>
Subject: Re: [Qemu-devel] [PATCH 1/2] iscsi: add support for bdrv_co_is_allocated()

On 21.06.2013 at 13:42, Peter Lieven wrote:
> On 21.06.2013 at 13:07, Kevin Wolf wrote:
> > On 21.06.2013 at 11:45, Peter Lieven wrote:
> >> On 21.06.2013 at 11:18, Kevin Wolf wrote:
> >>> On 20.06.2013 at 20:20, Peter Lieven wrote:
> >>>> Signed-off-by: Peter Lieven
> >>>> ---
> >>>>  block/iscsi.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>  1 file changed, 57 insertions(+)
> >>>>
> >>>> diff --git a/block/iscsi.c b/block/iscsi.c
> >>>> index 0bbf0b1..e6b966d 100644
> >>>> --- a/block/iscsi.c
> >>>> +++ b/block/iscsi.c
> >>>> @@ -49,6 +49,7 @@ typedef struct IscsiLun {
> >>>>      uint64_t num_blocks;
> >>>>      int events;
> >>>>      QEMUTimer *nop_timer;
> >>>> +    uint8_t lbpme;
> >>>>  } IscsiLun;
> >>>>
> >>>>  typedef struct IscsiAIOCB {
> >>>> @@ -800,6 +801,60 @@ iscsi_getlength(BlockDriverState *bs)
> >>>>      return len;
> >>>>  }
> >>>>
> >>>> +static int coroutine_fn iscsi_co_is_allocated(BlockDriverState *bs,
> >>>> +                                              int64_t sector_num,
> >>>> +                                              int nb_sectors, int *pnum)
> >>>> +{
> >>>> +    IscsiLun *iscsilun = bs->opaque;
> >>>> +    struct scsi_task *task = NULL;
> >>>> +    struct scsi_get_lba_status *lbas = NULL;
> >>>> +    struct scsi_lba_status_descriptor *lbasd = NULL;
> >>>> +    int ret;
> >>>> +
> >>>> +    *pnum = nb_sectors;
> >>>> +
> >>>> +    if (iscsilun->lbpme == 0) {
> >>>> +        return 1;
> >>>> +    }
> >>>> +
> >>>> +    /* in-flight requests could invalidate the lba status result */
> >>>> +    while (iscsi_process_flush(iscsilun)) {
> >>>> +        qemu_aio_wait();
> >>>> +    }
> >>> Note that you're blocking here. The preferred way would be something
> >>> involving a yield from the coroutine and a reenter as soon as all
> >>> requests are done. Maybe a CoRwLock does what you need?
> >> Is there a document on how to use it? Or can you help here?
> > The idea would be to take a read lock while any request is in flight
> > (i.e. qemu_co_rwlock_rdlock() before it is started and
> > qemu_co_rwlock_unlock() when it completes), and to take a write lock
> > (qemu_co_rwlock_wrlock()) for the part of iscsi_co_is_allocated() that
> > requires that no other request runs in parallel.
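
Sketched out (untested, and the rwlock member name is just made up here
for illustration), the pattern would be something like:

    /* in IscsiLun, set up once with qemu_co_rwlock_init() */
    CoRwlock rwlock;

    /* around every regular read/write request */
    qemu_co_rwlock_rdlock(&iscsilun->rwlock);
    /* ... submit the request; when it completes: */
    qemu_co_rwlock_unlock(&iscsilun->rwlock);

    /* in iscsi_co_is_allocated(), replacing the qemu_aio_wait() loop */
    qemu_co_rwlock_wrlock(&iscsilun->rwlock); /* yields until all readers unlock */
    /* ... run GET LBA STATUS ... */
    qemu_co_rwlock_unlock(&iscsilun->rwlock);
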
> Wouldn't this require that all the other operations in iscsi.c also
> take this lock? Currently only aio requests seem to be implemented.

Hm, okay, that makes it a bit harder, because AIO callbacks aren't
called in coroutine context. So taking the lock would be easy, but
releasing it could turn out somewhat tricky.

> Would it help here to add something like this?
>
>     while (iscsi_process_flush(iscsilun)) {
>         if (qemu_in_coroutine()) {
>             qemu_coroutine_yield();
>         } else {
>             qemu_aio_wait();
>         }
>     }

You're always in a coroutine here. The problem is that if you yield,
you need to reenter the coroutine at some point, or the is_allocated
request would never complete. That's basically what the coroutine locks
do for you: they yield and only reenter when the lock can be taken.
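
In other words, the manual version of that reenter would look roughly
like this (a sketch only; the co_waiter field is invented for the
example, and a CoRwLock does the equivalent bookkeeping for you):

    /* in iscsi_co_is_allocated(): park this coroutine until
     * all in-flight requests have drained */
    while (iscsi_process_flush(iscsilun)) {
        iscsilun->co_waiter = qemu_coroutine_self();
        qemu_coroutine_yield(); /* reentered from the completion path below */
    }

    /* in the AIO completion callback, after the last in-flight
     * request has finished */
    if (iscsilun->co_waiter && !iscsi_process_flush(iscsilun)) {
        Coroutine *co = iscsilun->co_waiter;
        iscsilun->co_waiter = NULL;
        qemu_coroutine_enter(co, NULL);
    }

Kevin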