From: Dario Faggioli <dario.faggioli@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Cc: Nate Studer <nate.studer@dornerworks.com>, xen-devel@lists.xen.org
Subject: Re: Strange interdependence between domains
Date: Thu, 13 Feb 2014 18:36:55 +0100	[thread overview]
Message-ID: <1392313015.32038.112.camel@Solace> (raw)
In-Reply-To: <1646915994.20140213165604@gmail.com>



On Thu, 2014-02-13 at 16:56 +0000, Simon Martin wrote:
> Hi all,
> 
Hey Simon!

First of all, since you're using ARINC653, I'm adding Nate, as he's the
ARINC653 scheduler's maintainer; let's see if he can help us! ;-P

> I  am  now successfully running my little operating system inside Xen.
> It  is  fully  preemptive and working a treat, 
>
Aha, this is great! :-)

> but I have just noticed
> something  I  wasn't expecting, and will really be a problem for me if
> I can't work around it.
> 
Well, let's see...

> My configuration is as follows:
> 
> 1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.
> 
> 2.- Xen: 4.4 (just pulled from repository)
> 
> 3.- Dom0: Debian Wheezy (Kernel 3.2)
> 
> 4.- 2 cpu pools:
> 
> # xl cpupool-list
> Name               CPUs   Sched     Active   Domain count
> Pool-0               3    credit       y          2
> pv499                1  arinc653       y          1
> 
Ok, I think I figured this out from the other information, but it would
be useful to know which pcpus are assigned to which cpupool. I think
it's `xl cpupool-list -c'.
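
For instance, something like this is what I'd expect it to show, given
your vcpu-list below (I'm going from memory here, so the exact column
layout may well differ on 4.4):

  # xl cpupool-list -c
  Name               CPU list
  Pool-0             0,1,2
  pv499              3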

> 5.- 2 domU:
> 
> # xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0   984     3     r-----      39.7
> win7x64                                      1  2046     3     -b----     143.0
> pv499                                        3   128     1     -b----      61.2
> 
> 6.- All VCPUs are pinned:
> 
Right. Although, if you use cpupools and I've understood what you're
up to, you really should not need pinning at all: the isolation
between the RT-ish domain and the rest of the world should already be
in place thanks to cpupools.
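
BTW, just to be sure we're talking about the same setup, I'm assuming
the pool was created with something along these lines (the file name is
just an example, and the syntax is from memory, so double check it
against the xl manpage):

  # cat pv499-pool.cfg
  name  = "pv499"
  sched = "arinc653"
  cpus  = ["3"]

  # xl cpupool-cpu-remove Pool-0 3
  # xl cpupool-create pv499-pool.cfg
  # xl cpupool-migrate pv499 pv499

(or, instead of the migrate, pool = "pv499" directly in the domain's
config file).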

Actually, pinning can help, but maybe not in the exact way you're using
it...

> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    0   -b-      27.5  0
> Domain-0                             0     1    1   -b-       7.2  1
> Domain-0                             0     2    2   r--       5.1  2
> win7x64                              1     0    0   -b-      71.6  0
> win7x64                              1     1    1   -b-      37.7  1
> win7x64                              1     2    2   -b-      34.5  2
> pv499                                3     0    3   -b-      62.1  3
> 
...as can be seen here.

So, if you ask me, you're restricting things too much in Pool-0, where
dom0 and the Windows VM run. In fact, is there a specific reason why
you need each of their vcpus to be statically pinned to a single pcpu?
If not, I'd give them a little more freedom.

What I'd try is:
 1. all dom0 and win7 vcpus free, so no pinning at all in Pool-0.
 2. pinning as follows (a sketch of the commands is below):
     * all vcpus of win7 --> pcpus 1,2
     * all vcpus of dom0 --> no pinning
    This way, what you get is the following: win7 could suffer sometimes,
    if all 3 of its vcpus get busy, but I think that is acceptable, at
    least up to a certain extent. Is that the case?
    At the same time, you are making sure dom0 always has a chance to
    run, as pcpu#0 would be its exclusive playground, in case someone,
    including your pv499 domain, needs its services.
> 7.- pv499 is the domU that I am testing. It has no disk or vif devices
> (yet). I am running a little test program in pv499 and the timing I
> see varies depending on disk activity.
> 
> My test program prints the time taken in milliseconds for a
> million cycles. With no disk activity I see 940 ms, with disk activity
> I see 1200 ms.
> 
Wow, it's very hard to tell. My first thought was that your domain
may need something from dom0, and that the suboptimal (IMHO) pinning
configuration you're using could be slowing that down. The flaw in this
theory is that dom0 services are mostly PV drivers for disk and network,
which you say you don't have...

I still think your pinning setup is unnecessarily restrictive, so I'd
give the above a try, but it's probably not the root cause of your issue.

> I can't understand this as disk activity should be running on cores 0,
> 1 and 2, but never on core 3. The only thing running on core 3 should
> be my paravirtual machine and the hypervisor stub.
> 
Right. Are you familiar with tracing what happens inside Xen with
xentrace and, perhaps, xenalyze? It takes a bit of time to get used to
them but, once you master them, they are a good means of getting really
useful information out!

There is a blog post about that here:
http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/
and it should have most of the info, or links to where to find it.

It's going to be a lot of data, but if you trace one run without disk IO
and one run with disk IO, it should be doable to compare the two, for
instance in terms of when the vcpus of your domain are active and when
they get scheduled. From that, we can hopefully narrow down the real
root cause a bit more.
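
Just as a rough sketch of what I mean (the flags are from memory, so
double check them against `xentrace --help' and the blog post above):

  # xentrace -D -e all /tmp/nodisk.bin      <- run your test with no disk IO, then stop with Ctrl-C
  # xentrace -D -e all /tmp/disk.bin        <- same again, this time with disk IO going on
  # xenalyze --summary /tmp/nodisk.bin > nodisk.txt
  # xenalyze --summary /tmp/disk.bin > disk.txt

Comparing the two summaries (and, if needed, the full dumps) should
already tell us something about where the extra ~260 ms per run goes.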

Let us know if you think you need help with that.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)





Thread overview: 26+ messages
2014-02-13 16:56 Strange interdependence between domains Simon Martin
2014-02-13 17:07 ` Ian Campbell
2014-02-13 17:28   ` Simon Martin
2014-02-13 17:39     ` Dario Faggioli
2014-02-13 17:36 ` Dario Faggioli [this message]
2014-02-13 20:47   ` Nate Studer
2014-02-13 22:25   ` Simon Martin
2014-02-13 23:13     ` Dario Faggioli
2014-02-14 10:26       ` Don Slutz
2014-02-14 12:02     ` Simon Martin
2014-02-14 13:26       ` Andrew Cooper
2014-02-14 17:21       ` Dario Faggioli
2014-02-17 12:46         ` Simon Martin
2014-02-18 16:55           ` Dario Faggioli
2014-02-18 17:58             ` Don Slutz
2014-02-18 18:06               ` Dario Faggioli
2014-02-20  6:07                 ` Juergen Gross
2014-02-20 18:22                   ` Dario Faggioli
2014-02-21  6:31                     ` Juergen Gross
2014-02-21 17:24                       ` Dario Faggioli
2014-02-24  9:25                         ` Juergen Gross
2014-02-17 13:19         ` Juergen Gross
2014-02-17 15:08           ` Dario Faggioli
2014-02-18  5:31             ` Juergen Gross
2014-02-17 14:13         ` Nate Studer
2014-02-18 16:47           ` Dario Faggioli
