From: "Liu.yi" <liu.yi24@zte.com.cn>
To: xen-devel@lists.xensource.com
Subject: credit scheduler svc->flags access race?
Date: Fri, 30 Dec 2011 18:55:25 -0800 (PST)
Message-ID: <1325300125668-5111504.post@n5.nabble.com>

Hi all,

On my HP DL380G7 server, csched_vcpu_yield() and csched_acct() can access
svc->flags at the same time. When that happens, VMs stop running, because
csched_vcpu_yield() overwrites the CSCHED_FLAG_VCPU_PARKED bit that
csched_acct() has just set in svc->flags (my VMs' scheduler cap value is
not 0, so they do get parked).
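To make the window concrete: "flags |= bit" is a plain load/modify/store,
so a store from another CPU that lands between the load and the store is
silently lost. Below is a minimal user-space sketch of that lost-update
pattern (illustrative only, not Xen code; the flag values and thread
bodies are hypothetical stand-ins for csched_vcpu_yield and csched_acct):

/* race_sketch.c -- build with: gcc -O2 -pthread race_sketch.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define FLAG_PARKED 0x0001 /* stand-in for CSCHED_FLAG_VCPU_PARKED */
#define FLAG_YIELD  0x0002 /* stand-in for CSCHED_FLAG_VCPU_YIELD  */

static volatile uint16_t flags;
static volatile int done;

/* Mimics the yield path: set and clear YIELD with plain,
 * non-atomic read-modify-write operations. */
static void *yielder(void *arg)
{
    while ( !done )
    {
        flags |= FLAG_YIELD;
        flags &= ~FLAG_YIELD;
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    long i, lost = 0;

    pthread_create(&t, NULL, yielder, NULL);
    for ( i = 0; i < 10000000; i++ )
    {
        flags |= FLAG_PARKED;         /* csched_acct parks the vcpu */
        if ( !(flags & FLAG_PARKED) ) /* a stale store from yielder
                                       * wiped the bit back out     */
            lost++;
        flags &= ~FLAG_PARKED;
    }
    done = 1;
    pthread_join(t, NULL);
    printf("PARKED bit lost %ld times\n", lost);
    return 0;
}

In the hypervisor that lost update clears CSCHED_FLAG_VCPU_PARKED behind
csched_acct's back, so the parked vcpu is never unpaused again and the VM
appears hung.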
The VMs run fine if I modify sched_credit.c as follows:

--- xen/common/sched_credit.c   2010-12-10 10:19:45.000000000 +0800
+++ ../../xen-4.1.0/xen/common/sched_credit.c   2010-12-31 10:47:39.000000000 +0800
@@ -135,7 +135,7 @@ struct csched_vcpu {
     struct vcpu *vcpu;
     atomic_t credit;
     s_time_t start_time;   /* When we were scheduled (used for credit) */
-    uint16_t flags;
+    uint32_t flags;
     int16_t pri;
 #ifdef CSCHED_STATS
     struct {
@@ -787,7 +787,7 @@ csched_vcpu_yield(const struct scheduler
     if ( !sched_credit_default_yield )
     {
         /* Let the scheduler know that this vcpu is trying to yield */
-        sv->flags |= CSCHED_FLAG_VCPU_YIELD;
+        set_bit(1, &sv->flags);
     }
 }
 
@@ -1086,7 +1086,7 @@ csched_acct(void* dummy)
                 {
                     CSCHED_STAT_CRANK(vcpu_park);
                     vcpu_pause_nosync(svc->vcpu);
-                    svc->flags |= CSCHED_FLAG_VCPU_PARKED;
+                    set_bit(0, &svc->flags);
                 }
 
                 /* Lower bound on credits */
@@ -1111,7 +1111,7 @@ csched_acct(void* dummy)
                      */
                     CSCHED_STAT_CRANK(vcpu_unpark);
                     vcpu_unpause(svc->vcpu);
-                    svc->flags &= ~CSCHED_FLAG_VCPU_PARKED;
+                    clear_bit(0, &svc->flags);
                 }
 
                 /* Upper bound on credits means VCPU stops earning */
@@ -1337,7 +1337,7 @@ csched_schedule(
      * Clear YIELD flag before scheduling out
      */
     if ( scurr->flags & CSCHED_FLAG_VCPU_YIELD )
-        scurr->flags &= ~(CSCHED_FLAG_VCPU_YIELD);
+        clear_bit(1, &scurr->flags);

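For comparison, the user-space sketch above stops losing the PARKED bit
once every update of the word is an atomic read-modify-write, which is
what set_bit/clear_bit provide (on x86 they are locked bit instructions).
Here is a sketch of the fixed variant, using GCC's __atomic builtins as a
user-space stand-in for the Xen bitops; only the flag updates change:

/* Drop-in replacements for the plain updates in the sketch above.
 * __atomic_fetch_or/__atomic_fetch_and are GCC/Clang builtins used
 * here as stand-ins for Xen's set_bit/clear_bit. */
static void *yielder(void *arg)
{
    while ( !done )
    {
        __atomic_fetch_or(&flags, FLAG_YIELD, __ATOMIC_SEQ_CST);
        __atomic_fetch_and(&flags, (uint16_t)~FLAG_YIELD, __ATOMIC_SEQ_CST);
    }
    return NULL;
}

/* ...and in main()'s loop: */
    __atomic_fetch_or(&flags, FLAG_PARKED, __ATOMIC_SEQ_CST);
    if ( !(flags & FLAG_PARKED) )
        lost++;               /* stays 0 with atomic updates */
    __atomic_fetch_and(&flags, (uint16_t)~FLAG_PARKED, __ATOMIC_SEQ_CST);

With these changes the "lost" counter stays at zero, which matches the
VMs running fine with the set_bit/clear_bit patch applied.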
Are these modifications correct? Another interesting thing is that the
VMs run fine on another server even though their credit cap values are
not 0; I don't know why.
My Xen version is 4.1.0, and the dom0 kernel is 2.6.32.41, taken from
Jerry's git a few months ago.

Thanks

liuyi


