linux-fsdevel.vger.kernel.org archive mirror
From: Takuya Yoshikawa <takuya.yoshikawa@gmail.com>
To: Avi Kivity <avi@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	mingo@elte.hu, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, kvm@vger.kernel.org,
	mtosatti@redhat.com, yoshikawa.takuya@oss.ntt.co.jp
Subject: Re: [RFC] sched: make callers check lock contention for cond_resched_lock()
Date: Thu, 3 May 2012 23:11:07 +0900	[thread overview]
Message-ID: <20120503231107.e8c5a5dde90e109e570ba32e@gmail.com> (raw)
In-Reply-To: <4FA27E5E.5000002@redhat.com>

On Thu, 03 May 2012 15:47:26 +0300
Avi Kivity <avi@redhat.com> wrote:

> On 05/03/2012 03:29 PM, Peter Zijlstra wrote:
> > On Thu, 2012-05-03 at 21:22 +0900, Takuya Yoshikawa wrote:
> > > Although the real use case is out of this RFC patch, we are now discussing
> > > a case in which we may hold a spin_lock for long time, ms order, depending
> > > on workload;  and in that case, other threads -- VCPU threads -- should be
> > > given higher priority for that problematic lock. 
> >
> > Firstly, if you can hold a lock that long, it shouldn't be a spinlock,
> 
> In fact with your mm preemptibility work it can be made into a mutex, if
> the entire mmu notifier path can be done in task context.  However it
> ends up a strange mutex - you can sleep while holding it but you may not
> allocate, because you might recurse into an mmu notifier again.
> 
> Most uses of the lock only involve tweaking some bits though.

I think I may have found a real way forward.

After your "mmu_lock -- TLB-flush" decoupling, we can change the current
get_dirty workflow like this:

	for ... {
		take mmu_lock
		for 4K*8 gfns {		// with 4KB dirty_bitmap_buffer
			xchg dirty bits	// 64/32 gfns at once
			write protect them
		}
		release mmu_lock
		copy_to_user
	}
	TLB flush

This reduces the size of dirty_bitmap_buffer and avoids holding mmu_lock
for so long at a time.
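To make the per-chunk work concrete, here is a minimal user-space sketch of
one iteration of the loop above: xchg a 4KB chunk of the dirty bitmap 64
gfns at a time, note which gfns need write protection, and hand the
harvested bits to a buffer for the copy_to_user step.  All names here
(harvest_chunk, BUF_GFNS, etc.) are illustrative, not KVM's actual
identifiers, and GCC's __atomic_exchange_n stands in for the kernel's
xchg; write protection is represented only by the dirty count.

```c
#include <stddef.h>
#include <stdint.h>

/* 4KB dirty_bitmap_buffer covers 4K*8 = 32768 gfns per chunk. */
#define BITS_PER_WORD (8 * sizeof(uint64_t))
#define BUF_GFNS      (4096 * 8)
#define BUF_WORDS     (BUF_GFNS / BITS_PER_WORD)

/*
 * Process one 4KB chunk: this models the work done under mmu_lock in a
 * single iteration of the outer loop.  Atomically grab-and-clear 64
 * dirty bits at a time, "write protect" each dirty gfn (counted here),
 * and stash the bits in user_buf for the copy_to_user that follows
 * after mmu_lock is released.  Returns the number of dirty gfns.
 */
static int harvest_chunk(uint64_t *bitmap, uint64_t *user_buf)
{
    int dirty = 0;

    /* take mmu_lock */
    for (size_t w = 0; w < BUF_WORDS; w++) {
        /* xchg: grab and clear 64 gfns' dirty bits at once */
        uint64_t bits = __atomic_exchange_n(&bitmap[w], 0,
                                            __ATOMIC_SEQ_CST);
        user_buf[w] = bits;
        for (; bits; bits &= bits - 1)
            dirty++;        /* write protect this gfn's sptes */
    }
    /* release mmu_lock; then copy_to_user(user_buf, ...) */
    return dirty;
}
```

Because the lock is dropped between chunks, the hold time is bounded by
one 4KB chunk's worth of work rather than the whole bitmap, and the
single TLB flush at the end covers all the write protection done across
chunks.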

I should have thought of a way not to hold the spin_lock so long, as Peter
said.  My lack of thinking might be the real problem.

Thanks,
	Takuya


Thread overview: 14+ messages
2012-05-03  8:12 [RFC] sched: make callers check lock contention for cond_resched_lock() Takuya Yoshikawa
2012-05-03  8:35 ` Peter Zijlstra
2012-05-03 12:22   ` Takuya Yoshikawa
2012-05-03 12:29     ` Peter Zijlstra
2012-05-03 12:47       ` Avi Kivity
2012-05-03 14:11         ` Takuya Yoshikawa [this message]
2012-05-03 14:27           ` Avi Kivity
2012-05-03 14:38             ` Avi Kivity
2012-05-03 13:00       ` Takuya Yoshikawa
2012-05-03 15:47         ` Peter Zijlstra
2012-05-10 22:03           ` Takuya Yoshikawa
2012-05-18  7:26             ` Ingo Molnar
2012-05-18 16:10               ` Takuya Yoshikawa
2012-05-04  2:43         ` Michael Wang
