From: Keir Fraser <keir@xen.org>
To: "G.R." <firemeteor@users.sourceforge.net>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH] hvmloader / qemu-xen: Getting rid of resource conflict for OpRegion.
Date: Thu, 20 Dec 2012 14:19:22 +0000	[thread overview]
Message-ID: <CCF8CEEA.561B5%keir@xen.org> (raw)
In-Reply-To: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>

On 20/12/2012 13:31, "G.R." <firemeteor@users.sourceforge.net> wrote:

> If the concern is about security, the same argument should apply to the
> first page (the portion before the page offset).
> The problem is that I have no idea what is around the mapped page, and I'm
> not sure who has that knowledge.

Well, we can't do better than mapping some whole number of pages, really,
unless we trap to qemu on every access. I don't think we'd go there unless
there really were a known security issue. But mapping only the exact number
of pages we definitely need is a good principle.

> What's the standard flow for handling such a mapping with an offset?
> I expect this to be a common case, since the ioremap function in the
> Linux kernel accepts this.

map_size = ((host_opregion & 0xfff) + 8192 + 0xfff) >> 12

Possibly with suitable macros used in place of the magic numbers (e.g.,
XC_PAGE_* and a macro for the OpRegion size).
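For illustration, a minimal sketch of that calculation with the magic
numbers replaced by macros. XC_PAGE_SHIFT/XC_PAGE_SIZE/XC_PAGE_MASK are
the real libxc definitions from xenctrl.h (repeated here to keep the
sketch self-contained); IGD_OPREGION_SIZE and opregion_map_pages() are
hypothetical names invented for this example:

    /* Sketch only: how many whole pages must be mapped to cover an 8kB
     * OpRegion that may start at an arbitrary offset within a page. */
    #include <stdint.h>

    #define XC_PAGE_SHIFT     12                      /* as in xenctrl.h */
    #define XC_PAGE_SIZE      (1UL << XC_PAGE_SHIFT)
    #define XC_PAGE_MASK      (~(XC_PAGE_SIZE - 1))
    #define IGD_OPREGION_SIZE 0x2000   /* hypothetical macro: 8kB OpRegion */

    static unsigned long opregion_map_pages(uint64_t host_opregion)
    {
        uint64_t offset = host_opregion & ~XC_PAGE_MASK; /* offset into first page */

        /* Round offset + size up to a whole number of pages. */
        return (offset + IGD_OPREGION_SIZE + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
    }

So an OpRegion starting exactly on a page boundary would map two pages,
while one starting mid-page needs a third page to cover the tail.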

 -- Keir

Thread overview: 26+ messages
2012-12-20  3:52 [PATCH] hvmloader / qemu-xen: Getting rid of resource conflict for OpRegion G.R.
2012-12-20  3:56 ` G.R.
2012-12-20 10:41   ` Ian Campbell
2012-12-20 13:03     ` Keir Fraser
2012-12-20 13:31       ` G.R.
2012-12-20 14:19         ` Keir Fraser [this message]
2012-12-20 15:06           ` G.R.
2012-12-20 18:27             ` Ross Philipson
2012-12-21  3:59               ` G.R.
2012-12-21 15:55                 ` Ross Philipson
2012-12-21 16:49                   ` G.R.
2012-12-21 17:03                     ` Ross Philipson
2012-12-21 17:26                       ` Ross Philipson
2012-12-23  6:11                         ` G.R.
2013-01-02 16:34                           ` Ross Philipson
2013-01-04  7:25                             ` G.R.
2013-01-09 15:34                             ` G.R.
2013-01-09 16:36                               ` Ross Philipson
2013-01-10 10:27                                 ` G.R.
2013-01-10 13:40                                   ` Ross Philipson
2013-01-10 16:29                                     ` G.R.
2013-01-14 16:01                                       ` Ross Philipson
2013-01-15 16:44                                         ` G.R.
2012-12-20 19:44             ` Jean Guyader
2012-12-20 19:50       ` Ian Campbell
2012-12-21  3:51         ` G.R.
