From: Umesh Deshpande <udeshpan@redhat.com>
Subject: Re: [RFC PATCH v3 3/4] lock to protect memslots
Date: Mon, 15 Aug 2011 16:27:02 -0400
Message-ID: <4E498116.9030800@redhat.com>
References: <4E440131.6020200@redhat.com> <4E44CC1E.202@redhat.com> <20110815072648.GA2916@amt.cnet> <4E4929B8.2010509@redhat.com>
In-Reply-To: <4E4929B8.2010509@redhat.com>
To: Paolo Bonzini
Cc: Marcelo Tosatti, kvm@vger.kernel.org, quintela@redhat.com

On 08/15/2011 10:14 AM, Paolo Bonzini wrote:
> On 08/15/2011 12:26 AM, Marcelo Tosatti wrote:
>> Actually the previous patchset does not traverse the ramlist without
>> qemu_mutex locked, which is safe versus the most-recently-used-block
>> optimization.
>
> Actually it does:
>
>      bytes_transferred_last = bytes_transferred;
>      bwidth = qemu_get_clock_ns(rt_clock);
>
> +    if (stage != 3) {
> +        qemu_mutex_lock_ramlist();
> +        qemu_mutex_unlock_iothread();
> +    }
> +
>      while (!qemu_file_rate_limit(f)) {
>          int bytes_sent;
>
>          /* ram_save_block does traverse memory.  */
>          bytes_sent = ram_save_block(f);
>          bytes_transferred += bytes_sent;
>          if (bytes_sent == 0) { /* no more blocks */
>              break;
>          }
>      }
>
> +    if (stage != 3) {
> +        qemu_mutex_lock_iothread();
> +        qemu_mutex_unlock_ramlist();
> +    }
> +
>      bwidth = qemu_get_clock_ns(rt_clock) - bwidth;
>      bwidth = (bytes_transferred - bytes_transferred_last) / bwidth;
>
> What Umesh is doing is using "either ramlist mutex or iothread mutex"
> when reading the ramlist, and "both" when writing the ramlist; similar
> to rwlocks done with a regular mutex per CPU---clever!  So this:
>
> +    qemu_mutex_lock_ramlist();
>      QLIST_REMOVE(block, next);
>      QLIST_INSERT_HEAD(&ram_list.blocks, block, next);
> +    qemu_mutex_unlock_ramlist();
>
> is effectively upgrading the lock from read-side to write-side,
> assuming that qemu_get_ram_ptr is never called from the migration
> thread (which is true).
>
> However, I propose that you put the MRU order in a separate list.  You
> would still need two locks: the IO thread lock to protect the new
> list, and a new lock to protect the other fields in the ram_list.  For
> simplicity you may skip the new lock if you assume that the migration
> and I/O threads never modify the list concurrently, which is true.

Yes, the MRU list patch would obviate the need to hold the ram_list
mutex in qemu_get_ram_ptr. Also, I was planning to protect the whole
migration thread with the iothread mutex and the ram_list mutex (i.e.
holding the ram_list mutex while sleeping between two iterations, when
we release the iothread mutex). This would prevent memslot block removal
altogether during migration. Do you see any problem with this?

> And more importantly, the MRU and migration code absolutely do not
> affect each other, because indeed the migration thread does not do MRU
> accesses.  See the attachment for an untested patch.
>
> Paolo

Thanks
Umesh