From: Jan Kiszka
Date: Tue, 13 Aug 2013 14:57:33 +0200
To: Alex Bligh
Cc: Kevin Wolf, Anthony Liguori, qemu-devel@nongnu.org, liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth@twiddle.net
Subject: Re: [Qemu-devel] [RFC] [PATCHv10 00/31] aio / timers: Add AioContext timers and use ppoll
Message-ID: <520A2D3D.7060201@siemens.com>
In-Reply-To: <781DC0DE-B781-45C0-9351-32925EF51BF9@alex.org.uk>

On 2013-08-13 14:44, Alex Bligh wrote:
>
> On 13 Aug 2013, at 13:22, Jan Kiszka wrote:
>
>> With tweaking I mean:
>>
>> bool aio_poll(AioContext *ctx, bool blocking,
>>               void (*blocking_cb)(bool, void *),
>>               void *blocking_cb_opaque);
>>
>> i.e. adding a callback that aio_poll will invoke before and right after
>> waiting for events/timeouts. This allows us to drop/reacquire locks that
>> protect data structures used both by the timer thread and other threads
>> running the device model.
>
> That's interesting. I didn't give a huge amount of thought to thread
> extensibility (not least as the locking needed fixing first), but the
> model I had in my head was not that the locks were taken on exit from
> qemu_poll_ns and released on entry to it, but rather that the
> individual dispatch functions and timer functions called only took
> whatever locks they needed as and when they needed them. I.e.
> everything would already be unlocked prior to calling qemu_poll_ns.
> I suppose both would work.

Well, all the timer machinery requires some locking as well. So one
option is to add this to the core; the other - the one that I'm
following - is to push the locking to the timer users. The advantage
of the latter approach is that you can often reuse existing locks
instead of extending their number excessively, potentially causing
ordering issues. The locks to be reused are, of course, the BQL or
device model locks, like in my RTC scenario. Or think of a networking
backend like slirp: TCP timers could run under the same lock that is
also protecting the rest of a slirp instance's state machine. Well,
not sure we can gain a lot by threading slirp, but the concept remains
the same.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
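
For illustration, here is a minimal sketch of how a caller might use the
extended aio_poll() Jan proposes above to drop the BQL around the blocking
wait. The callback semantics (true before blocking, false right after) and
the helper name bql_blocking_cb are assumptions made for this sketch; they
are not part of any posted patch, and the snippet will not build against a
QEMU tree without that callback parameter:

    /* Assumed contract: aio_poll() calls blocking_cb(true, opaque) just
     * before it blocks in ppoll() and blocking_cb(false, opaque) right
     * after the wait returns. */
    static void bql_blocking_cb(bool before_wait, void *opaque)
    {
        if (before_wait) {
            /* Nothing we protect is touched while we sleep, so let other
             * threads (e.g. a timer thread) take the BQL in the meantime. */
            qemu_mutex_unlock_iothread();
        } else {
            /* Reacquire before handlers and timers are dispatched. */
            qemu_mutex_lock_iothread();
        }
    }

    /* Caller side, e.g. in the main loop: */
    aio_poll(ctx, true /* blocking */, bql_blocking_cb, NULL);

The alternative Jan favours needs no aio_poll() change at all: each timer
callback takes whatever existing lock already guards its data (the BQL, a
device lock, or a per-slirp lock for TCP timers) on entry and releases it
on exit.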