From mboxrd@z Thu Jan 1 00:00:00 1970
From: Liu Ping Fan <qemulist@gmail.com>
Date: Thu, 21 Jun 2012 23:06:56 +0800
Message-Id: <1340291218-11669-1-git-send-email-qemulist@gmail.com>
Subject: [Qemu-devel] [RFC] use fine-grained locks to replace qemu_mutex_lock_iothread
To: qemu-devel@nongnu.org

Nowadays, we use qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread()
to protect the emulated devices against racing accesses from the vcpu
threads and the iothread. But this lock is too big, and we can break it
down. These patches separate the protection of CPUArchState from that of
the other devices, so we can have a per-cpu lock for each CPUArchState
rather than the big lock.
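
A minimal sketch of the per-cpu locking idea, not the actual patch: the
embedded QemuMutex field, its placement in CPUArchState, and the helper
names below are hypothetical illustrations, and the include path is
assumed from the in-tree qemu-thread API of this era.

/* Sketch only: contrasts the global iothread lock with a per-cpu
 * lock.  Field and function names here are illustrative, not the
 * ones introduced by this series. */
#include "qemu-thread.h"   /* QemuMutex, qemu_mutex_init/lock/unlock */

/* Declared in qemu-common.h in-tree; declared here so the sketch
 * stands alone. */
void qemu_mutex_lock_iothread(void);
void qemu_mutex_unlock_iothread(void);

typedef struct CPUArchState {
    /* ... existing per-vcpu register and MMU state ... */
    QemuMutex lock;        /* hypothetical: guards only this vcpu */
} CPUArchState;

static void cpu_state_lock_init(CPUArchState *env)
{
    qemu_mutex_init(&env->lock);
}

/* Before: every access serializes on the single big lock, so vcpu
 * threads contend with each other and with the iothread. */
static void touch_cpu_state_big_lock(CPUArchState *env)
{
    qemu_mutex_lock_iothread();
    /* read/write env->... */
    qemu_mutex_unlock_iothread();
}

/* After: only this vcpu's own lock is taken, so threads working on
 * different CPUArchState instances no longer contend. */
static void touch_cpu_state_per_cpu(CPUArchState *env)
{
    qemu_mutex_lock(&env->lock);
    /* read/write env->... */
    qemu_mutex_unlock(&env->lock);
}

The point of the split is that accesses confined to one vcpu's state
stop paying for global serialization; accesses that still span shared
device state would keep taking the big lock (or a later device-level
lock) until the granularity is pushed further.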