From: "Michael S. Tsirkin" <mst@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: pbonzini@redhat.com, kevin@koconnor.net, seabios@seabios.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v2] map 64-bit PCI BARs at location provided by emulator
Date: Tue, 15 Oct 2013 12:16:43 +0300
Message-ID: <20131015091643.GA5658@redhat.com>
In-Reply-To: <20131015110548.167dce45@nial.usersys.redhat.com>

On Tue, Oct 15, 2013 at 11:05:48AM +0200, Igor Mammedov wrote:
> On Tue, 15 Oct 2013 10:01:01 +0200
> Gerd Hoffmann <kraxel@redhat.com> wrote:
> 
> >   Hi,
> > 
> > > Yes, but at the cost of overspecifying it.
> > > I think it's down to the name: it's called pcimem64-start,
> > > but it can actually be less than 4G, and we need to worry about
> > > what to do then. Also, 64 doesn't really mean >4G.
> > > 
> > > So how about "reserve-memory-over-4g"?
> > > The BIOS then does (1ull << 32) + reserve-memory-over-4g
> > > to figure out how much to skip.
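
Spelling that computation out as a minimal C sketch (the name is
illustrative, not an agreed interface; note the parentheses, since in C
"1ull << 32 + x" would parse as "1ull << (32 + x)"):

    #include <stdint.h>

    /* Sketch only: reserve_memory_over_4g would be supplied by the
     * emulator; the name is hypothetical, not a settled interface. */
    static uint64_t pci64_skip_to(uint64_t reserve_memory_over_4g)
    {
        /* (1ULL << 32) + x, not 1ULL << (32 + x). */
        return (1ULL << 32) + reserve_memory_over_4g;
    }
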
> > 
> > We are reaching the point where it becomes pointless bikeshedding ...
> > 
> > I want an interface which is clearly defined and which doesn't break if
> > the way we use the address space above 4g changes (hotplug,
> > non-contiguous memory, whatever).  So making it depend on the memory
> > deployed isn't a clever idea.
> > 
> > So at the end of the day it comes down to specifying an address, either
> > relative to 4g (your reserve-memory-over-4g suggestion) or relative to
> > zero (Igor's pcimem64-start patch).  Both will do the job.  In both cases
> > the BIOS has to check that it has no conflicts with known RAM regions
> > (i.e. compare against (1 << 32) + RamSizeAbove4G).
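
Something like this minimal sketch of that conflict check (both parameter
names are made up for illustration; the BIOS treats the provided value as
a minimum and bumps the 64-bit PCI window above RAM if they overlap):

    #include <stdint.h>

    /* Sketch only: clamp the requested start of the 64-bit PCI window
     * so it does not overlap RAM mapped above 4G.  Names are
     * hypothetical, not part of any agreed interface. */
    static uint64_t pci64_window_start(uint64_t pcimem64_start,
                                       uint64_t ram_size_above_4g)
    {
        uint64_t ram_top = (1ULL << 32) + ram_size_above_4g;
        return pcimem64_start > ram_top ? pcimem64_start : ram_top;
    }
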
> > 
> > I personally don't see the point in having the address relative to 4g
> > and prefer the pcimem64-start approach.  We could rename it to
> > pcimem64-minimum-address to make it clearer that this is about keeping
> > some space free rather than specifying a fixed address where the 64-bit
> > PCI BARs should be mapped.  But at the end of the day I don't care too
> > much; how we are going to name the baby is just a matter of taste and
> > not really critical for the interface ...
> Michael,
> 
> My preference is the same as Gerd's.
> Though if you NACK this approach, I'm fine with the relative-to-4g
> approach you suggest; the only change I'd like to see in the naming is
> for the memory reservation part to be replaced with pcimem64, i.e.
> something like:
>  pcimem64-4gb-offset
> to reflect the value we are actually passing in.

I'm not going to nack.

> > 
> > What is the state of the qemu side patches btw?
> I have them ready, but they conflict with your 1Tb-in-e820 RFC;
> I can post the relevant patches as soon as we agree on this topic.
> May I pick up your patch and post it along with the pcimem64-start patches?

So for QEMU we really need to merge them together with
memory hotplug, I think.  It's not a big patch, correct?
If it's small, there's no need to merge just this interface
first; let's merge it all together.

> > 
> > cheers,
> >   Gerd

Thread overview: 30+ messages
2013-10-13 12:13 [Qemu-devel] [PATCH v2] map 64-bit PCI BARs at location provided by emulator Igor Mammedov
2013-10-13 12:31 ` Michael S. Tsirkin
2013-10-13 15:11   ` Igor Mammedov
2013-10-13 15:59     ` Michael S. Tsirkin
2013-10-13 16:23       ` Igor Mammedov
2013-10-13 16:46         ` Michael S. Tsirkin
2013-10-13 17:33           ` Igor Mammedov
2013-10-13 18:19             ` [Qemu-devel] [SeaBIOS] " Igor Mammedov
2013-10-13 19:53               ` Kevin O'Connor
2013-10-14  8:01                 ` Gerd Hoffmann
2013-10-13 20:28             ` [Qemu-devel] " Michael S. Tsirkin
2013-10-14 10:27               ` Igor Mammedov
2013-10-14 11:00                 ` Michael S. Tsirkin
2013-10-14 12:16                   ` Gerd Hoffmann
2013-10-14 12:38                     ` Michael S. Tsirkin
2013-10-14 13:04                       ` Gerd Hoffmann
2013-10-14 14:00                         ` Michael S. Tsirkin
2013-10-14 16:15                           ` Igor Mammedov
2013-10-14 16:37                             ` Michael S. Tsirkin
2013-10-15  8:01                           ` Gerd Hoffmann
2013-10-15  9:05                             ` Igor Mammedov
2013-10-15  9:14                               ` [Qemu-devel] [SeaBIOS] " Gerd Hoffmann
2013-10-15 12:36                                 ` Igor Mammedov
2013-10-15  9:16                               ` Michael S. Tsirkin [this message]
2013-10-15  9:24                                 ` [Qemu-devel] " Gerd Hoffmann
2013-10-15  9:53                                   ` Igor Mammedov
2013-10-15  9:47                                 ` Igor Mammedov
2013-10-15  9:08                             ` Michael S. Tsirkin
2013-10-14 12:28                   ` Igor Mammedov
2013-10-13 15:15   ` Kevin O'Connor
