From mboxrd@z Thu Jan 1 00:00:00 1970
Sender: Paolo Bonzini
Message-ID: <5502D92A.5070907@redhat.com>
Date: Fri, 13 Mar 2015 13:33:46 +0100
From: Paolo Bonzini
MIME-Version: 1.0
References: <1426210723-16735-1-git-send-email-famz@redhat.com> <1426210723-16735-5-git-send-email-famz@redhat.com> <55029C22.1070901@redhat.com> <20150313085847.GC3527@ad.nay.redhat.com> <5502C074.2060807@redhat.com>
In-Reply-To: <5502C074.2060807@redhat.com>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext
To: Fam Zheng
Cc: qemu-devel@nongnu.org

On 13/03/2015 11:48, Paolo Bonzini wrote:
> > The other possibility is to grab a reference for the cpu_register_map_client
> > call, and release it in reschedule_dma. This way the atomics can stay, but
> > we'll need a "finished" flag in DMAAIOCB to avoid double completion.
>
> Considering this is a slow path, a lock seems preferable.

And another problem... You need to be careful about dma_aio_cancel running
concurrently with continue_after_map_failure, because
continue_after_map_failure can be called from another thread.
You could have

    continue_after_map_failure          dma_aio_cancel
    ------------------------------------------------------------------
    aio_bh_new
                                        qemu_bh_delete
    qemu_bh_schedule (use after free)

To fix this, my suggestion is to pass a BH directly to
cpu_register_map_client (possibly to cpu_unregister_map_client as well?
seems to have pros and cons).  Then cpu_notify_map_clients can run entirely
with the lock taken, and not race against cpu_unregister_map_client.
dma_aio_cancel can just do cpu_unregister_map_client followed by
qemu_bh_delete.

Paolo
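For illustration, the suggested scheme might look roughly like the sketch
below.  This is a standalone model, not QEMU code: the "BH" is reduced to a
caller-owned callback struct, pthread mutexes stand in for QemuMutex, and the
callback is invoked directly where the real code would call
qemu_bh_schedule().  The point it demonstrates is the locking discipline:
because notification holds the client-list lock, either the canceller removes
the client first (and can then safely delete its BH), or notification consumes
the client first, but the freed BH is never touched.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

/* Minimal model of the proposed API.  A BH here is just a callback owned
 * by whoever created it; the map-client list only borrows the pointer. */
typedef struct BH {
    void (*cb)(void *opaque);
    void *opaque;
} BH;

typedef struct MapClient {
    BH *bh;
    struct MapClient *next;
} MapClient;

static pthread_mutex_t map_client_list_lock = PTHREAD_MUTEX_INITIALIZER;
static MapClient *map_client_list;

/* Caller creates the BH up front and registers it; no allocation of a BH
 * happens later in the notification path, so there is nothing to race on. */
void cpu_register_map_client(BH *bh)
{
    MapClient *c = malloc(sizeof(*c));
    c->bh = bh;
    pthread_mutex_lock(&map_client_list_lock);
    c->next = map_client_list;
    map_client_list = c;
    pthread_mutex_unlock(&map_client_list_lock);
}

/* Returns true if the BH was still registered, i.e. notification had not
 * consumed it yet.  After this returns, the caller may delete the BH. */
bool cpu_unregister_map_client(BH *bh)
{
    pthread_mutex_lock(&map_client_list_lock);
    for (MapClient **p = &map_client_list; *p; p = &(*p)->next) {
        if ((*p)->bh == bh) {
            MapClient *c = *p;
            *p = c->next;
            free(c);
            pthread_mutex_unlock(&map_client_list_lock);
            return true;
        }
    }
    pthread_mutex_unlock(&map_client_list_lock);
    return false;
}

/* Runs entirely under the lock, so it cannot race with unregister: either
 * the client is still listed and its BH is scheduled, or unregister already
 * removed it and the (possibly deleted) BH is never dereferenced. */
void cpu_notify_map_clients(void)
{
    pthread_mutex_lock(&map_client_list_lock);
    while (map_client_list) {
        MapClient *c = map_client_list;
        map_client_list = c->next;
        c->bh->cb(c->bh->opaque);   /* qemu_bh_schedule() in real code */
        free(c);
    }
    pthread_mutex_unlock(&map_client_list_lock);
}
```

With this shape, dma_aio_cancel becomes cpu_unregister_map_client() followed
by qemu_bh_delete(), and the aio_bh_new/qemu_bh_delete window from the
diagram above simply cannot open.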