public inbox for kvm@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: paulmck@linux.vnet.ibm.com
Cc: Avi Kivity <avi@redhat.com>, Oleg Nesterov <oleg@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	KVM list <kvm@vger.kernel.org>
Subject: Re: [RFC][PATCH] srcu: Implement call_srcu()
Date: Wed, 01 Feb 2012 11:22:29 +0100	[thread overview]
Message-ID: <1328091749.2760.34.camel@laptop> (raw)
In-Reply-To: <20120131222447.GH2391@linux.vnet.ibm.com>

On Tue, 2012-01-31 at 14:24 -0800, Paul E. McKenney wrote:

> > > Can we get it back to speed by scheduling a work function on all cpus? 
> > > wouldn't that force a quiescent state and allow call_srcu() to fire?
> > > 
> > > In kvm's use case synchronize_srcu_expedited() is usually called when no
> > > thread is in a critical section, so we don't have to wait for anything
> > > except the srcu machinery.
> > 
> > OK, I'll try and come up with means of making it go fast again ;-)
> 
> I cannot resist suggesting a kthread to do the call_srcu(), which
> would allow synchronize_srcu_expedited() to proceed with its current
> brute-force speed.

Right, so I really don't like to add a kthread per srcu instance.
Sharing a kthread between all SRCUs will be problematic since these sync
things can take forever and so the thread will become a bottleneck.

Also, I'd really like to come up with a better means of sync for SRCU
and not hammer the entire machine (3 times).

One of the things I was thinking of is adding a sequence counter in the
per-cpu data. Using that we could do something like:

  unsigned int seq1 = 0, seq2 = 0, count = 0;
  int cpu, idx;

  idx = ACCESS_ONCE(sp->completions) & 1;

  for_each_possible_cpu(cpu)
	seq1 += per_cpu_ptr(sp->per_cpu_ref, cpu)->seq;

  for_each_possible_cpu(cpu)
	count += per_cpu_ptr(sp->per_cpu_ref, cpu)->c[idx];

  for_each_possible_cpu(cpu)
	seq2 += per_cpu_ptr(sp->per_cpu_ref, cpu)->seq;

  /*
   * There are no active references and no reader activity
   * between the two seq sweeps; we pass.
   */
  if (seq1 == seq2 && count == 0)
	return;

  synchronize_srcu_slow();


This would add a fast path that should catch the case Avi outlined,
where we call sync_srcu() while there is no other SRCU activity.

The other thing I was hoping to be able to pull off is add a copy of idx
into the same cacheline as c[] and abuse cache-coherency to avoid some
of the sync_sched() calls, but that's currently hurting my brain.


Thread overview: 34+ messages
     [not found] <1328016724.2446.229.camel@twins>
2012-01-31 13:47 ` [RFC][PATCH] srcu: Implement call_srcu() Avi Kivity
2012-01-31 13:50   ` Peter Zijlstra
2012-01-31 22:24     ` Paul E. McKenney
2012-02-01 10:22       ` Peter Zijlstra [this message]
2012-02-01 10:44         ` Avi Kivity
2012-02-01 10:49           ` Avi Kivity
2012-02-01 11:00             ` Takuya Yoshikawa
2012-02-01 11:01               ` Avi Kivity
2012-02-01 11:12                 ` Takuya Yoshikawa
2012-02-01 13:24                   ` Avi Kivity
2012-02-02  5:46                     ` [test result] dirty logging without srcu update -- " Takuya Yoshikawa
2012-02-02 10:10                       ` Avi Kivity
2012-02-02 10:21                         ` Takuya Yoshikawa
2012-02-02 10:21                           ` Avi Kivity
2012-02-02 10:40                             ` Takuya Yoshikawa
2012-02-02 11:02                               ` Avi Kivity
2012-02-02 14:44                                 ` Takuya Yoshikawa
2012-02-02 14:57                                   ` Avi Kivity
2012-02-01 13:43                 ` Marcelo Tosatti
2012-02-01 15:42                   ` Takuya Yoshikawa
2012-02-01 13:50             ` Marcelo Tosatti
2012-02-08 15:43               ` [RFC] need to improve slot creation/destruction? -- " Takuya Yoshikawa
2012-02-08 18:45                 ` Marcelo Tosatti
2012-02-09 13:48                   ` Takuya Yoshikawa
2012-02-09 14:25                   ` Avi Kivity
2012-02-10 17:16                     ` Marcelo Tosatti
2012-02-14  9:52                       ` Avi Kivity
2012-02-09 14:23                 ` Avi Kivity
2012-02-09 14:24                   ` Avi Kivity
2012-02-10 13:08                     ` Takuya Yoshikawa
2012-02-10 17:17                       ` Marcelo Tosatti
2012-02-10 13:25                   ` Takuya Yoshikawa
2012-02-14  9:52                     ` Avi Kivity
2012-02-01 14:07         ` Paul E. McKenney
