From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <497C8121.9080903@codemonkey.ws>
Date: Sun, 25 Jan 2009 09:11:29 -0600
From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH] Disable AIO for Mac OS X
References: <1232827167-19058-1-git-send-email-agraf@suse.de>
	<497B7A03.6040905@codemonkey.ws> <497B7FAD.30005@codemonkey.ws>
	<71F46A21-2F3F-4526-BDE2-F5BD8312244D@suse.de>
	<497B8736.5040902@codemonkey.ws>
	<18D68CC9-539B-42E8-8A11-1F8570C96C56@suse.de>
	<497BA3C7.1010302@codemonkey.ws>
	<43377A16-D52E-4C31-8112-BF565A35304B@suse.de>
In-Reply-To: <43377A16-D52E-4C31-8112-BF565A35304B@suse.de>
List-Id: qemu-devel.nongnu.org
To: Alexander Graf
Cc: qemu-devel@nongnu.org

Alexander Graf wrote:
> The kill() is called, but we're never receiving the signal. Also when
> I kill -31 manually from the outside, the signal handler isn't invoked.

Anyone know much about signal delivery in Darwin? Is there a way to do
thread signaling directly?
>> If for some crazy reason the OS X port spawns another thread
>> somewhere without masking SIGUSR2 correctly, it could be that the
>> signal is getting lost.
>
> Hm - according to gdb things look pretty normal, no?
>
> (gdb) thread apply all bt
>
> Thread 2 (process 804 thread 0x1003):
> #0  0x91b3c3ae in __semwait_signal ()
> #1  0x91b67326 in _pthread_cond_wait ()
> #2  0x91b8c9f0 in pthread_cond_timedwait$UNIX2003 ()
> #3  0x000157b5 in aio_thread (unused=0x0) at posix-aio-compat.c:52
> #4  0x91b66095 in _pthread_start ()
> #5  0x91b65f52 in thread_start ()
>
> Thread 1 (process 804 thread 0x10b):
> #0  0x91b846f2 in select$DARWIN_EXTSN ()
> #1  0x00081443 in qemu_aio_wait () at aio.c:173
> #2  0x00080ef5 in bdrv_read_em (bs=0x4, sector_num=0,
>     buf=0x4 <Address 0x4 out of bounds>, nb_sectors=4) at block.c:1447
> #3  0x0007f9c9 in bdrv_guess_geometry (bs=0x806a00, pcyls=0xbfffdfcc,
>     pheads=0xbfffdfc8, psecs=0xbfffdfc4) at block.c:773
> #4  0x0002a238 in ide_init2 (ide_state=<value temporarily unavailable,
>     due to optimizations>, hd0=0x806a00, hd1=0x0, irq=0x402a18) at
>     /Users/alex/work/qemu-osx/qemu/hw/ide.c:2844
> #5  0x0002af2d in pci_piix3_ide_init (bus=0x4, hd_table=0xbfffeaf0,
>     devfn=4, pic=0x402930) at /Users/alex/work/qemu-osx/qemu/hw/ide.c:3435
> #6  0x00044199 in pc_init1 (ram_size=<value temporarily unavailable,
>     due to optimizations>, vga_ram_size=8388608, boot_device=0x11d946
>     "cad", kernel_filename=0x0, kernel_cmdline=0x11d33c "",
>     initrd_filename=0x0, pci_enabled=1, cpu_model=0x0) at
>     /Users/alex/work/qemu-osx/qemu/hw/pc.c:1027
> #7  0x000066e1 in main (argc=5, argv=0xbffff360, envp=0xbffff378) at
>     /Users/alex/work/qemu-osx/qemu/vl.c:5520

Can you dump the sigmasks of each thread to see if they're blocking
them correctly? Perhaps thread 2 is receiving the SIGUSR2 for some
weird reason?

Maybe try a different signal. Maybe SIGUSR2 has some significance in
Darwin.

Regards,

Anthony Liguori

> Alex
>
>> Regards,
>>
>> Anthony Liguori
>>
>>>> FWIW, at this point, we could drop the signal entirely and just use
>>>> a pipe for communication. Right now we use a signal that we catch
>>>> and then write to a pipe from the signal handler. We did this
>>>> because that's how posix-aio worked but since we don't use
>>>> posix-aio anymore, we're no longer limited by that.
>>>
>>> Hum - sounds like more effort and more probable breakage than
>>> tracking this down ;-).
>>>
>>> Alex
>