From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 4 Jan 2013 10:50:35 +0100
From: Stefan Hajnoczi
Message-ID: <20130104095035.GC14426@stefanha-thinkpad.redhat.com>
References: <50E57089.7080805@suse.de>
Subject: Re: [Qemu-devel] pthread_create failed: Resource temporarily unavailable
To: Christoffer Dall
Cc: Peter Maydell, "kvmarm@lists.cs.columbia.edu", Andreas Färber, "qemu-devel@nongnu.org Developers"

On Thu, Jan 03, 2013 at 01:53:20PM -0500, Christoffer Dall wrote:
> On Thu, Jan 3, 2013 at 6:50 AM, Andreas Färber wrote:
> >
> >> The culprit seems to be when the process runs out of virtual address
> >> space on 32-bit systems due to some subsystem (virtio?) creating a
> >> large number of pthreads under heavy workloads.
> >>
> >> Unfortunately my QEMU expertise is too limited to pin-point the exact
> >> fix, nor do I have resources right now to go into it, but I wanted to
> >> raise this issue and spread general awareness.
> >>
> >> Is this a known issue or something that needs to be tracked/documented at least?
> >
> > It is a known issue that I reported long ago, but there have been higher
> > priorities.
> > ;)
> > Note that this failure is not specifically about creating threads but
> > about thread creation being one of severa
>
> hmmm, tried to look at the output of configure, and it does give me this:
>
> coroutine backend ucontext
>
> running qemu-system-arm in GDB outputs this when doing disk IO (like
> untar'ing a kernel tree):
>
> [New Thread 0x5045b470 (LWP 6184)]
> [New Thread 0x4f148470 (LWP 6185)]
> [New Thread 0x4e6ff470 (LWP 6186)]
> [New Thread 0x49af5470 (LWP 6187)]
> [New Thread 0x492f5470 (LWP 6188)]
> [New Thread 0x48af5470 (LWP 6189)]
> [New Thread 0x482f5470 (LWP 6190)]
> [New Thread 0x47af5470 (LWP 6191)]
> [New Thread 0x472f5470 (LWP 6192)]
> [New Thread 0x46af5470 (LWP 6193)]
> [New Thread 0x462f5470 (LWP 6194)]
> [New Thread 0x45af5470 (LWP 6195)]
> [New Thread 0x452f5470 (LWP 6196)]
> [New Thread 0x44af5470 (LWP 6197)]
> [New Thread 0x442f5470 (LWP 6198)]
> [New Thread 0x43af5470 (LWP 6199)]
> [New Thread 0x432f5470 (LWP 6200)]
> [New Thread 0x42af5470 (LWP 6201)]
> [New Thread 0x422f5470 (LWP 6202)]
> [New Thread 0x41af5470 (LWP 6203)]
> [New Thread 0x412f5470 (LWP 6204)]
> [New Thread 0x40af5470 (LWP 6205)]
> [New Thread 0x402f5470 (LWP 6206)]
> [New Thread 0x3faf5470 (LWP 6207)]
> [New Thread 0x3f2f5470 (LWP 6208)]
> [New Thread 0x3eaf5470 (LWP 6209)]
> [New Thread 0x3e2f5470 (LWP 6210)]
> [New Thread 0x3daf5470 (LWP 6211)]
> [New Thread 0x3d2f5470 (LWP 6212)]
> [New Thread 0x3caf5470 (LWP 6213)]
> [New Thread 0x3c2f5470 (LWP 6214)]
> [New Thread 0x3baf5470 (LWP 6215)]
> [New Thread 0x3b2f5470 (LWP 6216)]
> [New Thread 0x3aaf5470 (LWP 6217)]
> [New Thread 0x3a2f5470 (LWP 6218)]
> [New Thread 0x39af5470 (LWP 6219)]
> [New Thread 0x392f5470 (LWP 6220)]
> [New Thread 0x38af5470 (LWP 6221)]
> [New Thread 0x380ff470 (LWP 6222)]
> [New Thread 0x378ff470 (LWP 6223)]
> [New Thread 0x366f7470 (LWP 6224)]
> [New Thread 0x339d2470 (LWP 6225)]
> [New Thread 0x331d2470 (LWP 6226)]
> [New Thread 0x36eff470 (LWP 6227)]
> [New Thread 0x35ef7470 (LWP 6228)]
> [New Thread 0x356f7470 (LWP 6229)]
> [New Thread 0x329d2470 (LWP 6230)]
> [New Thread 0x321d2470 (LWP 6231)]
> [New Thread 0x4bff9470 (LWP 6232)]
> [New Thread 0x349f2470 (LWP 6234)]
> [New Thread 0x305be470 (LWP 6235)]
> [New Thread 0x2fdbe470 (LWP 6236)]
> [New Thread 0x2f5be470 (LWP 6237)]
> [New Thread 0x4afe9470 (LWP 6238)]
> [New Thread 0x2edbe470 (LWP 6239)]
> [New Thread 0x2e5be470 (LWP 6240)]
> [New Thread 0x2ddbe470 (LWP 6241)]
> [New Thread 0x2d5be470 (LWP 6243)]
> [New Thread 0x2cdbe470 (LWP 6244)]
> [Thread 0x442f5470 (LWP 6198) exited]
> [Thread 0x4f948470 (LWP 6173) exited]
> [Thread 0x3e2f5470 (LWP 6210) exited]
> [Thread 0x35ef7470 (LWP 6228) exited]
> [Thread 0x452f5470 (LWP 6196) exited]
> [Thread 0x51d5c470 (LWP 6171) exited]
> [Thread 0x462f5470 (LWP 6194) exited]
> [Thread 0x2fdbe470 (LWP 6236) exited]
> [Thread 0x2edbe470 (LWP 6239) exited]
> [Thread 0x356f7470 (LWP 6229) exited]
> [Thread 0x482f5470 (LWP 6190) exited]
> [Thread 0x45af5470 (LWP 6195) exited]
> [Thread 0x4bff9470 (LWP 6232) exited]
> [Thread 0x36eff470 (LWP 6227) exited]
> [Thread 0x2ddbe470 (LWP 6241) exited]
> [Thread 0x4afe9470 (LWP 6238) exited]
> [Thread 0x305be470 (LWP 6235) exited]
> [Thread 0x5045b470 (LWP 6184) exited]
> [Thread 0x339d2470 (LWP 6225) exited]
> [Thread 0x3baf5470 (LWP 6215) exited]
> [Thread 0x47af5470 (LWP 6191) exited]
> [Thread 0x3faf5470 (LWP 6207) exited]
> [Thread 0x3d2f5470 (LWP 6212) exited]
> [Thread 0x349f2470 (LWP 6234) exited]
> [Thread 0x46af5470 (LWP 6193) exited]
> [Thread 0x76c27470 (LWP 6168) exited]
> [Thread 0x412f5470 (LWP 6204) exited]
> [Thread 0x49af5470 (LWP 6187) exited]
> [Thread 0x432f5470 (LWP 6200) exited]
> [Thread 0x4f148470 (LWP 6185) exited]
> [Thread 0x472f5470 (LWP 6192) exited]
> [Thread 0x422f5470 (LWP 6202) exited]
> [Thread 0x5145b470 (LWP 6172) exited]
> [Thread 0x3b2f5470 (LWP 6216) exited]
> [Thread 0x43af5470 (LWP 6199) exited]
> [Thread 0x2e5be470 (LWP 6240) exited]
> [Thread 0x366f7470 (LWP 6224) exited]
> [Thread 0x378ff470 (LWP 6223) exited]
> [Thread 0x392f5470 (LWP 6220) exited]
> [Thread 0x331d2470 (LWP 6226) exited]
> [Thread 0x402f5470 (LWP 6206) exited]
> [Thread 0x3f2f5470 (LWP 6208) exited]
> [Thread 0x50c5b470 (LWP 6178) exited]
> [Thread 0x3caf5470 (LWP 6213) exited]
> [Thread 0x2f5be470 (LWP 6237) exited]
> [Thread 0x3eaf5470 (LWP 6209) exited]
> [Thread 0x3aaf5470 (LWP 6217) exited]
> [Thread 0x48af5470 (LWP 6189) exited]
> [Thread 0x2cdbe470 (LWP 6244) exited]
> [Thread 0x3daf5470 (LWP 6211) exited]
> [Thread 0x380ff470 (LWP 6222) exited]
> [Thread 0x3c2f5470 (LWP 6214) exited]
> [Thread 0x38af5470 (LWP 6221) exited]
> [Thread 0x329d2470 (LWP 6230) exited]
> [Thread 0x2d5be470 (LWP 6243) exited]
> [Thread 0x44af5470 (LWP 6197) exited]
> [Thread 0x39af5470 (LWP 6219) exited]
> [Thread 0x42af5470 (LWP 6201) exited]
> [Thread 0x41af5470 (LWP 6203) exited]
> [Thread 0x321d2470 (LWP 6231) exited]
> [Thread 0x3a2f5470 (LWP 6218) exited]
> [Thread 0x4e6ff470 (LWP 6186) exited]
> [Thread 0x40af5470 (LWP 6205) exited]
> [Thread 0x492f5470 (LWP 6188) exited]
>
> and then it simply exits, gdb included :(

That's weird, but it should dump core on SIGABRT if you set ulimit -c
unlimited.  That way you can inspect the core dump with gdb post-mortem.

Stefan