From: Olaf Hering <olaf@aepfle.de>
To: zhen shi <bickys1986@gmail.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: some questions of IO ring in xenpaging
Date: Thu, 1 Sep 2011 12:15:25 +0200
Message-ID: <20110901101525.GA3258@aepfle.de>
In-Reply-To: <CACavRyA8dgzpAc3X3-2DkBTSD3FtxLnpm5O0k5NV7m=nGydFVQ@mail.gmail.com>

On Thu, Sep 01, zhen shi wrote:

> Hi Olaf --
> 
> I have two questions about xenpaging.
> 1) When the guest OS takes a page fault because the accessed page is
> paging_out or paged, it executes p2m_mem_paging_populate(), which
> first checks whether the ring is full. When I ran a suse11 domU with
> 4G of memory and 8 vcpus, I found that this ring check can be
> corrupted.
> For example, suppose four vcpus fault on different pages while the
> ring has only four free request slots. They all call
> p2m_mem_paging_populate() and execute mem_event_check_ring(d) at the
> same time. Each of them finds the ring is not full and fills in a
> request, so a later request can overwrite an earlier one.
> I think a lock should be taken before mem_event_check_ring(d) and
> released after mem_event_put_request(d, &req).
> Please see the attached xenpaging_IO_ring.txt to check whether my
> opinion is right.

Yes, you are right.
I think mem_event_check_ring() should reserve a reference, and
mem_event_put_request() should use that reference.
mem_sharing_alloc_page() even has a comment that this should be done.
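
Roughly what I have in mind, just as a sketch (req_reserved is an
invented field, and the ring handling is simplified from memory, this
is not the actual code):

    /* Sketch, in the context of xen/arch/x86/mm/mem_event.c:
     * mem_event_check_ring() reserves a slot under the ring lock, so
     * a vcpu whose check succeeded cannot lose its slot to another
     * vcpu between the check and the put. */
    static int mem_event_check_ring(struct domain *d)
    {
        struct mem_event_domain *med = &d->mem_event;
        int free_slots, ret = -EBUSY;

        mem_event_ring_lock(d);
        free_slots = RING_FREE_REQUESTS(&med->front_ring);
        if ( free_slots > med->req_reserved )
        {
            med->req_reserved++;   /* slot now belongs to this caller */
            ret = 0;
        }
        mem_event_ring_unlock(d);

        return ret;
    }

    static void mem_event_put_request(struct domain *d,
                                      mem_event_request_t *req)
    {
        struct mem_event_domain *med = &d->mem_event;
        mem_event_front_ring_t *front_ring = &med->front_ring;

        mem_event_ring_lock(d);
        med->req_reserved--;       /* consume the reservation */
        memcpy(RING_GET_REQUEST(front_ring, front_ring->req_prod_pvt),
               req, sizeof(*req));
        front_ring->req_prod_pvt++;
        RING_PUSH_REQUESTS(front_ring);
        mem_event_ring_unlock(d);

        notify_via_xen_event_channel(d, med->xen_port);
    }

With the check and the reservation done under the same lock, the race
you describe goes away: N callers can pass mem_event_check_ring() only
if N slots are actually free.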


> 2) mem_sharing and xenpaging share one IO ring per domU. In
> mem_sharing_alloc_page(), if alloc_domheap_page(d, 0) returns NULL,
> it pauses the vcpu, checks whether the ring is full, and finally
> fills in the request.
> I think its mem_event_check_ring(d) can also race with the one in
> p2m_mem_paging_populate(); reading free_requests and putting requests
> must be mutually exclusive. What's more, although alloc_domheap_page(d, 0)
> rarely fails in mem_sharing_alloc_page(), when it does the request
> still goes onto the IO ring. But when xenpaging handles page_in
> requests, it does not distinguish requests carrying the flag
> MEM_EVENT_FLAG_VCPU_PAUSED between paging and sharing. So if the
> request came from mem_sharing_alloc_page(), it still ends up in
> p2m_mem_paging_resume(), and the page's p2mt becomes p2m_ram_rw. I
> think this is wrong. Maybe we should add a req.type when paging in.

Yes, get_request() in xenpaging should check the type before popping the
request from the ring. Perhaps mem_sharing and xenpaging should each use
their own ring.
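
Something along these lines, as a rough sketch only (req->type and
MEM_EVENT_TYPE_PAGING do not exist today, they would first have to be
added to mem_event_request_t; this builds in the context of
tools/xenpaging/xenpaging.c):

    /* Sketch: pop a request only if it is a paging request.
     * req->type and MEM_EVENT_TYPE_PAGING are invented names. */
    static int get_request(mem_event_t *mem_event, mem_event_request_t *req)
    {
        mem_event_back_ring_t *back_ring = &mem_event->back_ring;
        RING_IDX req_cons = back_ring->req_cons;
        mem_event_request_t *r = RING_GET_REQUEST(back_ring, req_cons);

        if ( r->type != MEM_EVENT_TYPE_PAGING )
            return -1;      /* leave it for the sharing consumer */

        /* Copy request */
        memcpy(req, r, sizeof(*req));
        req_cons++;

        /* Update ring */
        back_ring->req_cons = req_cons;
        back_ring->sring->req_event = req_cons + 1;

        return 0;
    }

Of course a non-paging request sitting at the head of the ring would
then stall the paging consumer, which is another argument for giving
each of them its own ring.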

Olaf
