On 08/15/2011 12:26 AM, Marcelo Tosatti wrote:
> Actually the previous patchset does not traverse the ramlist without
> qemu_mutex locked, which is safe versus the most-recently-used-block
> optimization.

Actually it does:

     bytes_transferred_last = bytes_transferred;
     bwidth = qemu_get_clock_ns(rt_clock);

+    if (stage != 3) {
+        qemu_mutex_lock_ramlist();
+        qemu_mutex_unlock_iothread();
+    }
+
     while (!qemu_file_rate_limit(f)) {
         int bytes_sent;

         /* ram_save_block does traverse memory.  */
         bytes_sent = ram_save_block(f);
         bytes_transferred += bytes_sent;
         if (bytes_sent == 0) { /* no more blocks */
             break;
         }
     }

+    if (stage != 3) {
+        qemu_mutex_lock_iothread();
+        qemu_mutex_unlock_ramlist();
+    }
+
     bwidth = qemu_get_clock_ns(rt_clock) - bwidth;
     bwidth = (bytes_transferred - bytes_transferred_last) / bwidth;

What Umesh is doing is using "either the ramlist mutex or the iothread
mutex" when reading the ramlist, and "both" when writing the ramlist;
similar to rwlocks done with a regular mutex per CPU---clever!  So this:

+        qemu_mutex_lock_ramlist();
         QLIST_REMOVE(block, next);
         QLIST_INSERT_HEAD(&ram_list.blocks, block, next);
+        qemu_mutex_unlock_ramlist();

is effectively upgrading the lock from read-side to write-side, assuming
that qemu_get_ram_ptr is never called from the migration thread (which
is true).

However, I propose that you put the MRU order in a separate list.  You
would still need two locks: the IO thread lock to protect the new list,
and a new lock to protect the other fields in the ram_list.  For
simplicity you may skip the new lock if you assume that the migration
and I/O threads never modify the list concurrently, which is true.  More
importantly, the MRU and migration code then do not affect each other at
all, because the migration thread does not do MRU accesses.

See the attachment for an untested patch.

Paolo