Date: Mon, 19 Jul 2010 11:48:00 +0300
From: Gleb Natapov
Subject: Re: [Qemu-devel] Question about qemu firmware configuration (fw_cfg) device
Message-ID: <20100719084800.GI4689@redhat.com>
In-Reply-To: <51B0CBDB-E40B-4A14-A33E-34E13B9BF3CB@suse.de>
To: Alexander Graf
Cc: "Richard W.M. Jones" , qemu-devel@nongnu.org

On Mon, Jul 19, 2010 at 10:41:48AM +0200, Alexander Graf wrote:
> 
> On 19.07.2010, at 10:30, Gleb Natapov wrote:
> 
> > On Mon, Jul 19, 2010 at 10:24:46AM +0200, Alexander Graf wrote:
> >> 
> >> On 19.07.2010, at 10:19, Gleb Natapov wrote:
> >> 
> >> Yes and no. It sounds nice at first, but doesn't quite fit. There are two issues:
> >> 
> >> 1) We need a new PCI ID
> > We have our range. We can allocate from there.
> > 
> >> 2) There can be a lot of initrd binaries with multiboot.
> >>    We only have a limited number of BARs.
> >> 
> > Is that supported now with the fw_cfg interface? My main concern with
> > this approach is the huge BAR size, which may take a lot of space from
> > the PCI MMIO range if the guest OS decides to configure it.
> 
> Oh, right. I think I combined all the modules into the INITRD blob. Yeah, that would work. Is coalesced MMIO more efficient than coalesced PIO? Or do we have to do some RAM mapping for those special BAR regions?
> 
I think we will have to do RAM mapping; otherwise it may be slow too.
Coalesced MMIO is for writes, not reads, IIRC.

> Were there DMA capable devices back in ISA times? There must be. If so, we can just take a look at what they do and do it similarly. Bus mastering was a new thing for PCI, right?
> 
I think IDE can be considered a DMA-capable ISA device, no? At least it
works by writing to PIO ports and getting the result into memory, but
with interrupts and status bits and everything a real device should
have. The on-board DMA engine is also an ISA device.

--
			Gleb.
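For anyone following the thread: the fw_cfg device under discussion is a
simple selector/data register pair (on x86 the selector is PIO port 0x510
and the data port is 0x511). The guest writes a 16-bit key to the selector
and then reads the payload one byte at a time from the data port. Below is
a rough host-side model of that protocol; the struct and function names are
made up for illustration and are not QEMU's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Rough model of the fw_cfg selector/data protocol.  Not QEMU code;
 * names are illustrative only. */

#define FW_CFG_SIGNATURE 0x0000  /* well-known key; payload is "QEMU" */

struct fw_cfg_dev {
    uint16_t selector;      /* currently selected key */
    size_t offset;          /* read cursor into the payload */
    const uint8_t *blob;    /* payload bound to the selected key */
    size_t blob_len;
};

/* What a guest outw to the selector port (0x510) would do:
 * pick a key and rewind the read cursor. */
void fw_cfg_select(struct fw_cfg_dev *d, uint16_t key,
                   const uint8_t *blob, size_t len)
{
    d->selector = key;
    d->offset = 0;
    d->blob = blob;
    d->blob_len = len;
}

/* What each guest inb from the data port (0x511) would return:
 * exactly one byte, advancing the cursor.  Every payload byte is a
 * separate port access, i.e. a separate VM exit. */
uint8_t fw_cfg_read_byte(struct fw_cfg_dev *d)
{
    if (d->blob == NULL || d->offset >= d->blob_len)
        return 0;
    return d->blob[d->offset++];
}
```

The byte-at-a-time data port is exactly why pushing a large initrd through
fw_cfg is slow (one exit per byte), which is what motivates the BAR/DMA
ideas above.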
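On the BAR-size concern: PCI BAR sizing works by the OS writing all-ones to
the BAR and reading it back; the device hardwires the low (size-1) address
bits to zero, so the region size falls out of the read-back value. A small
sketch of that arithmetic (the helper name and the example read-back values
are ours, not from the PCI spec or any real device):

```c
#include <stdint.h>

/* Compute a 32-bit memory BAR's size from the value read back after
 * writing 0xFFFFFFFF to it.  Illustrative helper, not QEMU code. */
uint32_t mem_bar_size(uint32_t readback)
{
    uint32_t mask = readback & ~0xFu;  /* strip the type bits [3:0] */
    return ~mask + 1;                  /* two's complement yields the size */
}
```

Since the size is a power of two baked into the device, a BAR big enough to
hold a few hundred megabytes of initrd reserves that much of the 32-bit
MMIO window as soon as the guest OS assigns resources, whether or not
anything ever maps it.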