public inbox for linux-kernel@vger.kernel.org
From: "Kaigai Kohei" <kaigai@ak.jp.nec.com>
To: <paulmck@us.ibm.com>
Cc: "James Morris" <jmorris@redhat.com>,
	"Stephen Smalley" <sds@epoch.ncsc.mil>,
	"SELinux-ML(Eng)" <selinux@tycho.nsa.gov>,
	"Linux Kernel ML(Eng)" <linux-kernel@vger.kernel.org>
Subject: Re: RCU issue with SELinux (Re: SELINUX performance issues)
Date: Tue, 24 Aug 2004 16:27:05 +0900	[thread overview]
Message-ID: <043201c489ab$c16fd080$f97d220a@linux.bs1.fc.nec.co.jp> (raw)
In-Reply-To: 20040820201914.GA1244@us.ibm.com

Hi Paul, Thanks for your comments.

> A few comments interspersed for the larger patch (look for empty lines).
> I am missing why some of the list insertions and deletions are SMP safe.

Sorry, some of those were bugs on my part.

> > + if (spin_trylock(&avc_cache.slots_lock[hvalue])) {
> > + list_for_each_rcu(pos, &avc_cache.slots[hvalue]) {
> 
> Since we are holding the lock, the list cannot change, and the _rcu()
> is not needed.  Right?

No, only the insert/update/delete paths must hold the lock.
The _rcu suffix is necessary because the read-only path does not hold the lock.
(The bug was that the inserting path did not take the lock.)

> > + node = list_entry(pos, struct avc_node, list);
> 
> Why not just do:
> 
> list_for_each_entry(pos, &avc_cache.slots[hvalue], list) {
> 
> rather than having the separate list_entry().

Your suggestion is better; I have fixed it.

> > + if (org) {
> > + if (!new) {
> 
> If we run out of memory, we silently delete the node that we were
> wanting to update?  Is this really what you want?
> 
> OK, OK, given that the "C" in "AVC" stands for "cache", I guess that
> deleting the element does indeed make sense...

Indeed, I thought it was rather rough, so I fixed avc_insert().
kmalloc() is now unnecessary in avc_insert(), so the update
always succeeds.
(It does require somewhat more memory, though.)

> > + switch (event) {
> > + case AVC_CALLBACK_GRANT:
> > + new->ae.avd.allowed |= perms;
> 
> This is a 32-bit field, and is protected by a lock.  Therefore, only
> one update should be happening at a time.  This field is 32-bit aligned
> (right?), and so the write that does the update will appear atomic to
> readers.  Only one of the the updates happens in a given call.

Is this about "new->ae.avd.allowed |= perms;" ?
Nobody else can see 'new', since 'new' is a private duplicate
of 'org'. So the update is safe whether or not the lock is held.

In addition, the problem rooted in avc_insert()
was solved by the take2 patch.

> o There is some bizarre CPU out there that does not do
> atomic 32-bit writes (but this would break all sorts of
> stuff).
> 
> o Consecutive changes might give a slow reader the incorrect
> impression that permissions are wide open.  One way to take
> care of this would be to force a grace period between each
> change.  One way to do this would be to set a bit indicating
> an update, and have the call_rcu() clear it.  When getting
> ready to update, if the bit is still set, block until the
> bit is cleared.  This would guarantee that a given reader
> would see at most one update.
> 
> But this would require considerable code restructuring, since
> this code is still under an rcu_read_lock().  But putting
> forward the thought anyway in case it is helpful.
> 
> If updates are -really- rare, a simple way to make this happen
> would be to protect the updates with a sema_t, and just do
> an unconditional synchronize_kernel() just before releasing
> the sema_t.
> 
> Since the current code permits readers to see changes to
> several different nodes, a single synchronize_kernel() should
> suffice.

> > + list_del_rcu(&node->list);
> 
> I don't see what prevents multiple list_del_rcu()s from executing in
> parallel, mangling the list.  Is there some higher-level lock?
> If so, why do we need RCU protecting the list?

I agree, and have fixed it.

> > + rcu_read_lock();
> > + node = avc_insert(ssid,tsid,tclass,&entry);
> 
> I don't see what prevents two copies of avc_insert from running in parallel.
> Seems like this would trash the lists.  Or, if there is a higher-level lock
> that I am missing, why do we need RCU protecting the lists?

I agree, and have fixed it.

Thanks.
--------
Kai Gai <kaigai@ak.jp.nec.com>



Thread overview: 31+ messages
2004-08-16  9:33 RCU issue with SELinux (Re: SELINUX performance issues) Kaigai Kohei
2004-08-16 15:19 ` James Morris
2004-08-20 13:36   ` Kaigai Kohei
2004-08-20 14:53     ` James Morris
2004-08-24  7:27       ` Kaigai Kohei
2004-08-24 13:24         ` James Morris
2004-08-25  9:51           ` Kaigai Kohei
2004-08-25 18:31             ` James Morris
2004-08-25  9:52           ` [PATCH]atomic_inc_return() for i386/x86_64 (Re: RCU issue with SELinux) Kaigai Kohei
2004-08-20 17:31     ` RCU issue with SELinux (Re: SELINUX performance issues) Luke Kenneth Casson Leighton
2004-08-20 18:15       ` James Morris
2004-08-20 20:19     ` Paul E. McKenney
2004-08-20 20:35       ` James Morris
2004-08-24  7:27       ` Kaigai Kohei [this message]
     [not found]     ` <1093014789.16585.186.camel@moss-spartans.epoch.ncsc.mil>
2004-08-24  7:25       ` Kaigai Kohei
2004-08-24 15:37         ` Stephen Smalley
2004-08-25  9:51           ` Kaigai Kohei
2004-08-25 15:50             ` Stephen Smalley
2004-08-25 16:11               ` Stephen Smalley
2004-08-26  7:53               ` Kaigai Kohei
2004-08-26 13:24                 ` Stephen Smalley
2004-08-27 11:07                   ` Kaigai Kohei
2004-08-30 11:17                   ` [PATCH]SELinux performance improvement by RCU (Re: RCU issue with SELinux) Kaigai Kohei
2004-08-30 15:35                     ` Stephen Smalley
2004-08-30 16:13                       ` Paul E. McKenney
2004-08-31  4:33                         ` Kaigai Kohei
2004-08-31 16:20                           ` Paul E. McKenney
2004-08-31 15:33                     ` James Morris
2004-08-24 23:02         ` RCU issue with SELinux (Re: SELINUX performance issues) Paul E. McKenney
2004-08-25  9:51           ` Kaigai Kohei
2004-08-25 17:34             ` Paul E. McKenney
