From: Malcolm Crossley <malcolm.crossley@citrix.com>
To: Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	keir@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 2/2] grant_table: convert grant table rwlock to percpu rwlock
Date: Wed, 18 Nov 2015 13:08:35 +0000
Message-ID: <564C7853.2090304@citrix.com>
In-Reply-To: <1447848461.23626.48.camel@citrix.com>

On 18/11/15 12:07, Ian Campbell wrote:
> On Wed, 2015-11-18 at 11:56 +0000, Malcolm Crossley wrote:
>> On 18/11/15 11:50, Ian Campbell wrote:
>>> On Wed, 2015-11-18 at 11:23 +0000, Malcolm Crossley wrote:
>>>> On 18/11/15 10:54, Jan Beulich wrote:
>>>>>>>> On 18.11.15 at 11:36, <ian.campbell@citrix.com> wrote:
>>>>>> On Tue, 2015-11-17 at 17:53 +0000, Andrew Cooper wrote:
>>>>>>> On 17/11/15 17:39, Jan Beulich wrote:
>>>>>>>>>>> On 17.11.15 at 18:30, <andrew.cooper3@citrix.com> wrote:
>>>>>>>>> On 17/11/15 17:04, Jan Beulich wrote:
>>>>>>>>>>>>> On 03.11.15 at 18:58, <malcolm.crossley@citrix.com> wrote:
>>>>>>>>>>> --- a/xen/common/grant_table.c
>>>>>>>>>>> +++ b/xen/common/grant_table.c
>>>>>>>>>>> @@ -178,6 +178,10 @@ struct active_grant_entry {
>>>>>>>>>>>  #define _active_entry(t, e) \
>>>>>>>>>>>      ((t)->active[(e)/ACGNT_PER_PAGE][(e)%ACGNT_PER_PAGE])
>>>>>>>>>>>  
>>>>>>>>>>> +bool_t grant_rwlock_barrier;
>>>>>>>>>>> +
>>>>>>>>>>> +DEFINE_PER_CPU(rwlock_t *, grant_rwlock);
>>>>>>>>>> Shouldn't these be per grant table? And wouldn't doing so eliminate
>>>>>>>>>> the main limitation of the per-CPU rwlocks?
>>>>>>>>> The grant rwlock is per grant table.
>>>>>>>> That's understood, but I don't see why the above items aren't, too.
>>>>>>>
>>>>>>> Ah - because there is never any circumstance where two grant tables
>>>>>>> are locked on the same pcpu.
>>>>>>
>>>>>> So per-cpu rwlocks are really a per-pcpu read lock with a fallthrough
>>>>>> to a per-$resource (here == granttable) rwlock when any writers are
>>>>>> present for any instance of $resource, not just the one where the write
>>>>>> lock is desired, for the duration of any write lock?
>>>>>
>>>>
>>>> The above description is a very good summary of how the per-cpu rwlocks
>>>> behave. The code stores a pointer to the per-$resource in the percpu area
>>>> when a user is reading that per-$resource; this is why the lock is not
>>>> safe if you take it for two different per-$resources simultaneously. The
>>>> grant table code only takes one grant table lock at any one time, so it
>>>> is a safe user.
>>>
>>> So essentially the "per-pcpu read lock" as I called it is really in essence
>>> a sort of "byte lock" via the NULL vs non-NULL state of the per-cpu pointer
>>> to the underlying rwlock.
>>
>> It's not quite a byte lock because it stores a full pointer to the
>> per-$resource that it's using. It could be changed to be a byte lock, but
>> then you would need a percpu area per-$resource.
> 
> Right, I said "in essence sort of" and put scare quotes around the "byte
> lock" since I realise it's not literally a byte lock.
> 
> But really all I was getting at was that it has locked and unlocked states
> in some form or other.

I was just concerned that people may not pick up on the subtle difference: the
percpu read areas are used for multiple resources (of which none are ever
locked simultaneously by the same CPU), whereas byte locks typically each lock
one particular resource, so multiple byte locks can safely be held
simultaneously on the same CPU.
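
To make that concrete, the reader side of the percpu rwlock looks roughly like
this (a sketch only, with hypothetical helper names rather than the exact code
from patch 1/2; grant_rwlock and grant_rwlock_barrier are the variables from
the hunk quoted earlier, and the memory barriers are simplified to full
fences):

static inline void percpu_read_lock(rwlock_t *rwlock)
{
    /* Publish which resource's rwlock this CPU is "reading". */
    this_cpu(grant_rwlock) = rwlock;
    smp_mb();                          /* order the store vs the check below */

    if ( unlikely(grant_rwlock_barrier) )
    {
        /* A writer is active somewhere: back out of the fast path and
         * take the underlying per-resource rwlock instead. */
        this_cpu(grant_rwlock) = NULL;
        read_lock(rwlock);
    }
}

static inline void percpu_read_unlock(rwlock_t *rwlock)
{
    if ( this_cpu(grant_rwlock) == rwlock )
        this_cpu(grant_rwlock) = NULL; /* we were on the fast path */
    else
        read_unlock(rwlock);           /* we had fallen back to the rwlock */
}

Because the per-CPU area records which rwlock is being read, a writer only has
to wait for readers of its own resource rather than for every reader.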

> 
> (Maybe I should have said "like a bit lock with 32 or 64 bits, setting any
> of which corresponds to acquiring the lock" ;-))
> 
Not quite: setting the per-CPU read area "takes" the read lock for the
particular resource you passed into the percpu rwlock implementation. Writers
of one resource ($resource1) will safely ignore readers of another resource
($resource0).

The global barrier will, however, make _all_ readers fall back to taking the
per-$resource read lock. An optimisation would be to have a barrier variable
per-$resource (stored in struct grant_table in this case).
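
Again as a rough sketch (same caveats as above; this also glosses over what
happens if writers for two different grant tables overlap, which the single
global bool as written would not handle), the writer side would be:

static inline void percpu_write_lock(rwlock_t *rwlock)
{
    unsigned int cpu;

    grant_rwlock_barrier = 1;          /* divert new readers to the rwlock */
    smp_mb();

    /* Wait for any CPU still reading *this* resource via the fast path;
     * readers of other resources are ignored. */
    for_each_online_cpu ( cpu )
        while ( per_cpu(grant_rwlock, cpu) == rwlock )
            cpu_relax();

    write_lock(rwlock);                /* serialise with slow-path readers */
}

static inline void percpu_write_unlock(rwlock_t *rwlock)
{
    write_unlock(rwlock);
    grant_rwlock_barrier = 0;          /* re-enable the reader fast path */
}

Moving grant_rwlock_barrier into struct grant_table would then mean only
readers of the table actually being written get diverted onto the slow path.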

Malcolm


> iAN.
> 
