From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 30 Jun 2016 18:12:34 +0100
From: "Dr. David Alan Gilbert"
Message-ID: <20160630171233.GE2683@work-vm>
In-Reply-To: <1467303289.15123.102.camel@redhat.com>
References: <9b76415a-23e6-3ded-4dbc-42838cc164b0@redhat.com> <20160622142414.GI30202@redhat.com> <20160623014216-mutt-send-email-mst@redhat.com> <20160622232308.GQ30202@redhat.com> <20160623024400-mutt-send-email-mst@redhat.com> <1466671203.26189.35.camel@redhat.com> <20160629164252.GD10488@work-vm> <1467267046.15123.94.camel@redhat.com> <20160630105908.GA2683@work-vm> <1467303289.15123.102.camel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Subject: Re: [Qemu-devel] Default for phys-addr-bits? (was Re: [PATCH 4/5] x86: Allow physical address bits to be set)
To: Gerd Hoffmann
Cc: "Michael S. Tsirkin", Andrea Arcangeli, Marcel Apfelbaum, Paolo Bonzini, qemu-devel@nongnu.org, Eduardo Habkost

* Gerd Hoffmann (kraxel@redhat.com) wrote:
> > So that's mapped at an address beyond host phys-bits.
> > And it hasn't failed/crashed etc - but I guess maybe nothing is using
> > that 2G space?
>
> root@fedora ~# dmesg | grep Surface
> [    4.830095] [drm] qxl: 2048M of Surface memory size
>
> qxl bar 4 (64bit) and qxl bar 1 (32bit) are the same thing.  The 64bit
> bar can be a lot larger, obviously.  The 32bit bar is just an alias for
> the first portion of the 64bit bar.  So I guess qxl just falls back to
> using bar 1 instead of bar 4 because ioremap() on bar 4 fails.

Hmm, for me it's saying it mapped 64M on the setup with 64T maxmem and
48bit phys-bits, even though the bar is showing as OK; how is the
guest's ioremap detecting a problem?

> > Obviously 128T is a bit silly for maxmem at the moment, however I was
> > worrying what happens with 36/39/40bit hosts, and it's not unusual to
> > pick a maxmem that's a few TB even if the VMs you're initially
> > creating are only a handful of GB.  (oVirt/RHEV seems to use a 4TB
> > default for maxmem.)
>
> Oh, ok.  Should be fixed, I guess.
>
> > Still, this only hits as a problem if you hit the combination of:
> >   a) You use large PCI bars
>
> ovmf will map all 64bit bars high, even without running out of 32bit
> address space.  And with virtio 1.0 pretty much every virtual machine
> will have 64bit bars.

Hmm, OK, let me think about this: using the current BIOS + old virtio +
host phys-bits on a 36bit host, with maxmem=4T, would apparently work
because nothing would actually get mapped that high.  The same with OVMF
would work as well, because generally you wouldn't have any 64bit bars;
but then you turn on virtio 1.0 and.. well, then what happens?  The
guest sees 36 phys-address bits, so Linux probably drops the bars
associated with the virtio devices?  Hmm.
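[Editorial aside: the failure mode being debated above reduces to simple
arithmetic.  A minimal sketch - hypothetical helper, not QEMU or firmware
code - of when a 64-bit BAR placed above maxmem still fits below the
guest-visible physical-address limit (1 << phys_bits):]

```python
def bar_fits(phys_bits, maxmem_bytes, bar_size_bytes):
    """Return True if a 64-bit BAR of bar_size_bytes, naturally aligned
    and placed above maxmem (as firmware typically does), still ends
    below the 1 << phys_bits address limit."""
    limit = 1 << phys_bits
    # Align the BAR base up to its own size, the usual firmware policy.
    base = (maxmem_bytes + bar_size_bytes - 1) & ~(bar_size_bytes - 1)
    return base + bar_size_bytes <= limit

TB = 1 << 40
GB = 1 << 30
MB = 1 << 20

# 36-bit host (64G limit), maxmem=4T: even a 64M BAR lands beyond the limit.
print(bar_fits(36, 4 * TB, 64 * MB))   # False
# 48-bit host (256T limit), maxmem=64T: a 2G qxl BAR fits comfortably.
print(bar_fits(48, 64 * TB, 2 * GB))   # True
```

[This is only the placement arithmetic; whether the guest then drops the
BAR or ioremap() fails is the open question in the thread.]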
With current downstream qemu's 40bit physical address bits you'd get
those bars mapped, so it might break badly - except if we can figure out
why my 2GB qxl bar doesn't break, as at the start of this message.

Dave

> cheers,
>   Gerd
>

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK