From: misiu godfrey <godfrey@cs.queensu.ca>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: additional domain.c memory allocation causes "xm create" to fail
Date: Tue, 4 Sep 2012 16:53:20 -0400
Message-ID: <CAMVU=QgrX+n5gHuP+a5UzM1MbYJBv2b5BFJbLLhDwOL+8zgnog@mail.gmail.com>
In-Reply-To: <5046606E.9080908@citrix.com>



> "flush" is the correct term.
>
> However, the structure of caches works against you.  With a
> set-associative cache, you have no control over which of the ways
> within a set gets used for your cache line.  So on an N-way
> set-associative cache, your worst case may only dirty 1/N of the actual
> lines in the cache.
>
> After that, your L1 cache inclusion policy is going to affect how you
> dirty your L2 cache, as will whether you have unified caches or split
> instruction and data caches.
>
> Furthermore, on newer processors, multiple cores may share an L2 or L3
> cache, and context switches are unlikely to occur at exactly the same
> time on each core, meaning that a context switch on one core is going
> to (attempt to) nuke the L2 cache of the VM which is mid-run on another
> core.  Conversely, even executing the loop trying to dirty the cache
> means that you don't get all of it, and having another core executing
> on the same L2 cache means it will pull its data back in during your
> dirtying loop.
>

I have some more robust code that takes account of the set-associativity
of the cache, code I originally thought would be superfluous in this
situation.  Now that I have managed to execute this basic loop, I can
address the more complex case of a set-associative cache.  Currently I
don't need to worry about an L3 cache, because my test machine has no
shared cache between cores (nothing higher than an L2).  I will keep this
in mind, though, as it will need to be addressed once I get beyond the
proof of concept.
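
Roughly, the dirtying loop has the shape sketched below.  The geometry
constants are placeholders (real values would come from CPUID or the
cache descriptors), and the buffer is assumed to be physically
contiguous:

    /*
     * Sketch of a dirtying loop that touches every set and every way.
     * The geometry values are illustrative, not my actual parameters.
     */
    #define CACHE_LINE_SIZE  64     /* bytes per line (assumed) */
    #define CACHE_SETS       512    /* number of sets (assumed) */
    #define CACHE_WAYS       8      /* associativity  (assumed) */
    #define DIRTY_BUF_SIZE   (CACHE_LINE_SIZE * CACHE_SETS * CACHE_WAYS)

    static void dirty_cache(volatile char *buf)
    {
        unsigned long i;

        /*
         * Touch one byte per cache line.  A contiguous buffer of
         * sets * ways lines maps one line onto every way of every set,
         * so each write should dirty a distinct line -- modulo the
         * replacement-policy caveats above.
         */
        for ( i = 0; i < DIRTY_BUF_SIZE; i += CACHE_LINE_SIZE )
            buf[i] = (char)i;
    }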

Anyway, validating the xmalloc return value seems to have addressed my
problem, although the log I am printing to suggests that xmalloc never
actually fails.  I'll look into it further once I get more things working.
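
For reference, the check amounts to something like the following; the
buffer name, size, and error handling are placeholders rather than the
exact code in my domain.c patch:

    /* Validate the allocation before touching the buffer. */
    char *dirty_buf = xmalloc_bytes(DIRTY_BUF_SIZE);

    if ( dirty_buf == NULL )
    {
        printk(XENLOG_WARNING
               "dirty_cache: xmalloc_bytes(%d) failed\n", DIRTY_BUF_SIZE);
        return -ENOMEM;
    }

    /* ... run the dirtying loop over the buffer ... */
    xfree(dirty_buf);
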
Thanks a lot for your advice, Andrew.  Sorry my problem ended up being so
trivial.

-Misiu



