Date: Wed, 30 Mar 2011 18:09:53 +0200
From: "Michael S. Tsirkin"
Subject: Re: [Qemu-devel] [PATCH 3/3] vhost: roll our own cpu map variant
Message-ID: <20110330160953.GB26439@redhat.com>
To: Stefan Hajnoczi
Cc: kwolf@redhat.com, gleb@redhat.com, Jes Sorensen, Jason Wang, qemu-devel@nongnu.org, armbru@redhat.com, Christoph Hellwig, Alex Williamson, Amit Shah

On Tue, Mar 29, 2011 at 11:53:54AM +0100, Stefan Hajnoczi wrote:
> On Mon, Mar 28, 2011 at 10:14 PM, Michael S. Tsirkin wrote:
> > vhost used cpu_physical_memory_map to get the
> > virtual address for the ring, however,
> > this will exit on an illegal RAM address.
> > Since the addresses are guest-controlled, we
> > shouldn't do that.
> >
> > Switch to our own variant that uses the vhost
> > tables and returns an error instead of exiting.
>
> We should make all of QEMU more robust instead of just vhost. Perhaps
> introduce cpu_physical_memory_map_nofail(...) that aborts like the
> current cpu_physical_memory_map() implementation and then make non-hw/
> users call that one. hw/ users should check for failure.
>
> Stefan

Yeah, well ... at least vhost-net also wants to check that it is given a
RAM address, not some other physical address. We could generally replace
the memory management in vhost-net with some other logic; when that's
done, this one can go away as well.

-- 
MST
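
For illustration, here is a minimal sketch of the kind of lookup a
vhost-local map variant could do: walk the vhost region table and return
NULL on a miss instead of exiting QEMU. The structure layout and the
function name below are simplified assumptions for the sake of the
example, not the actual patch.

    /*
     * Hypothetical sketch only: names and layout are simplified and do
     * not correspond to the exact structures in the patch.
     */
    #include <stddef.h>
    #include <stdint.h>

    struct vhost_memory_region {
        uint64_t guest_phys_addr;   /* start of region in guest physical space */
        uint64_t memory_size;       /* length of the region in bytes */
        uint64_t userspace_addr;    /* matching address in QEMU's virtual space */
    };

    struct vhost_memory {
        uint32_t nregions;
        struct vhost_memory_region regions[];
    };

    /*
     * Translate a guest physical range into a host virtual pointer using
     * the vhost region table.  Returns NULL when the range is not backed
     * by guest RAM, so the caller can report an error instead of exiting.
     */
    static void *vhost_phys_to_host(struct vhost_memory *mem,
                                    uint64_t addr, uint64_t len)
    {
        uint32_t i;

        for (i = 0; i < mem->nregions; i++) {
            struct vhost_memory_region *reg = &mem->regions[i];

            if (addr >= reg->guest_phys_addr &&
                addr + len <= reg->guest_phys_addr + reg->memory_size) {
                return (void *)(uintptr_t)
                    (reg->userspace_addr + (addr - reg->guest_phys_addr));
            }
        }
        return NULL;    /* guest-controlled address is not RAM: fail gracefully */
    }

Because the table only contains regions vhost was told about, a lookup
miss also covers the case Michael mentions: a physical address that is
not RAM at all simply fails the translation.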