Message-ID: <4D58FAF2.6070106@siemens.com>
Date: Mon, 14 Feb 2011 10:50:42 +0100
From: Jan Kiszka
Subject: [Qemu-devel] [RFC] Some more io-thread optimizations
To: qemu-devel
Cc: Paolo Bonzini, Anthony Liguori, Marcelo Tosatti, Aurelien Jarno

Hi,

the patch below further reduces the io-thread overhead in TCG mode so
that emulating SMP boxes in particular gets noticeably faster. Its
essence: poll the file descriptors until select() returns 0, keeping
the global mutex locked. This reduces ping-pong with the vcpu threads,
most noticeably in TCG mode where we run in lock-step.

Split into two patches, I'm planning to route these changes via the KVM
queue (as they collide with other patches there).

Jan

--------8<---------

diff --git a/sysemu.h b/sysemu.h
index 23ae17e..0a69464 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -73,7 +73,7 @@ void cpu_synchronize_all_post_init(void);
 
 void qemu_announce_self(void);
 
-void main_loop_wait(int nonblocking);
+int main_loop_wait(int nonblocking);
 
 bool qemu_savevm_state_blocked(Monitor *mon);
 int qemu_savevm_state_begin(Monitor *mon, QEMUFile *f, int blk_enable,
diff --git a/vl.c b/vl.c
index ed2cdfa..66b7c6f 100644
--- a/vl.c
+++ b/vl.c
@@ -1311,7 +1311,7 @@ void qemu_system_powerdown_request(void)
     qemu_notify_event();
 }
 
-void main_loop_wait(int nonblocking)
+int main_loop_wait(int nonblocking)
 {
     IOHandlerRecord *ioh;
     fd_set rfds, wfds, xfds;
@@ -1356,9 +1356,16 @@ void main_loop_wait(int nonblocking)
 
     slirp_select_fill(&nfds, &rfds, &wfds, &xfds);
 
-    qemu_mutex_unlock_iothread();
+    if (timeout > 0) {
+        qemu_mutex_unlock_iothread();
+    }
+
     ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
-    qemu_mutex_lock_iothread();
+
+    if (timeout > 0) {
+        qemu_mutex_lock_iothread();
+    }
+
     if (ret > 0) {
         IOHandlerRecord *pioh;
 
@@ -1386,6 +1393,7 @@ void main_loop_wait(int nonblocking)
        them.  */
     qemu_bh_poll();
 
+    return ret;
 }
 
 static int vm_can_run(void)
@@ -1405,6 +1413,7 @@ qemu_irq qemu_system_powerdown;
 
 static void main_loop(void)
 {
+    int last_io = 0;
     int r;
 
     qemu_main_loop_start();
@@ -1421,7 +1430,12 @@ static void main_loop(void)
 #ifdef CONFIG_PROFILER
             ti = profile_getclock();
 #endif
-            main_loop_wait(nonblocking);
+#ifdef CONFIG_IOTHREAD
+            if (last_io > 0) {
+                nonblocking = true;
+            }
+#endif
+            last_io = main_loop_wait(nonblocking);
 #ifdef CONFIG_PROFILER
             dev_time += profile_getclock() - ti;
 #endif
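
PS: For readers skimming the thread, here is a minimal standalone sketch
of the idea above -- plain C, not QEMU code. fill_fd_sets(),
handle_ready_fds() and the pthread mutex are made-up placeholders
standing in for the real io-thread infrastructure:

#include <stddef.h>
#include <pthread.h>
#include <sys/select.h>

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

/* Placeholders for the real fd bookkeeping and dispatch. */
static void fill_fd_sets(int *nfds, fd_set *rfds)
{
    FD_ZERO(rfds);
    *nfds = 0;
}

static void handle_ready_fds(fd_set *rfds)
{
    (void)rfds;
}

/* One main-loop iteration, called with global_lock held.  If the
 * previous pass found ready fds (last_io > 0), poll with a zero
 * timeout and keep the lock; only when the last pass found nothing
 * do we drop the lock and actually sleep in select(), letting the
 * vcpu threads run.  The return value feeds last_io next time. */
static int loop_iteration(int last_io)
{
    struct timeval tv;
    int timeout_ms = last_io > 0 ? 0 : 1000;
    fd_set rfds;
    int nfds, ret;

    fill_fd_sets(&nfds, &rfds);           /* select() clobbers the sets */
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    if (timeout_ms > 0) {
        pthread_mutex_unlock(&global_lock); /* may sleep: release */
    }
    ret = select(nfds + 1, &rfds, NULL, NULL, &tv);
    if (timeout_ms > 0) {
        pthread_mutex_lock(&global_lock);
    }
    if (ret > 0) {
        handle_ready_fds(&rfds);
    }
    return ret;
}

The point is the asymmetry: the unlock/lock pair around select() is
skipped entirely on the zero-timeout polling passes, so consecutive
bursts of fd activity are drained without ever bouncing the mutex.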