xen-devel.lists.xenproject.org archive mirror
* Domain relinquish resources racing with p2m access
@ 2012-02-01 20:49 Andres Lagar-Cavilla
  2012-02-02 13:34 ` Tim Deegan
  0 siblings, 1 reply; 3+ messages in thread
From: Andres Lagar-Cavilla @ 2012-02-01 20:49 UTC (permalink / raw)
  To: xen-devel, tim, keir

So we've run into this interesting (race?) condition while doing
stress-testing. We pummel the domain with paging, sharing and mmap
operations from dom0, and concurrently we launch a domain destruction.
Often the logs show something along these lines:

(XEN) mm.c:958:d0 Error getting mfn 859b1a (pfn ffffffffffffffff) from L1
entry 8000000859b1a625 for l1e_owner=0, pg_owner=1

We're using the synchronized p2m patches just posted, so my analysis is as
follows:

- the domain destroy domctl kicks in. It calls relinquish resources, which
disowns and puts most of the domain's pages, resulting in invalid
(0xff...ff) m2p entries

- In parallel, a do_mmu_update is making progress. It has no trouble
performing a p2m lookup, because the p2m has not been torn down yet; we
haven't gotten to the RCU callback. Eventually, the mapping fails in
page_get_owner in get_page_from_l1e.

The mapping fails, as expected, but what makes me uneasy is that there is
still an active p2m lurking around, with seemingly valid translations to
valid mfns, while all the domain pages are gone.

Is this a race condition? Can this lead to trouble?

Thanks!
Andres


Thread overview: 3+ messages
2012-02-01 20:49 Domain relinquish resources racing with p2m access Andres Lagar-Cavilla
2012-02-02 13:34 ` Tim Deegan
2012-02-10 18:05   ` Andres Lagar-Cavilla
