linux-security-module.vger.kernel.org archive mirror
From: "Ravi Kumar Siddojigari" <rsiddoji@codeaurora.org>
To: "'Paul Moore'" <paul@paul-moore.com>,
	"'Stephen Smalley'" <sds@tycho.nsa.gov>
Cc: <selinux@vger.kernel.org>, <linux-security-module@vger.kernel.org>
Subject: RE: Looks like issue in handling active_nodes count in 4.19 kernel .
Date: Thu, 19 Dec 2019 15:18:45 +0530	[thread overview]
Message-ID: <002701d5b651$8434b2b0$8c9e1810$@codeaurora.org> (raw)
In-Reply-To: <CAHC9VhQYA8uTRQ0OajEmsTrDytNVx+BSiL5vEsGefKEhAw+gKA@mail.gmail.com>

Sorry, re-adding the patch below as requested.

Stephen,
The issue is fixed with these two changes. The issue was even reproduced on v4.14, and similar changes work there as well.

--
From 77c618006397c7a65ead257f3cb4e4fe3da2d4b8 Mon Sep 17 00:00:00 2001
From: Jaihind Yadav <jaihindyadav@codeaurora.org>
Date: Tue, 17 Dec 2019 17:25:47 +0530
Subject: [PATCH] selinux: ensure we cleanup the internal AVC counters on error
 in avc_update()

In AVC update we don't call avc_node_kill() when avc_xperms_populate()
fails, resulting in the avc->avc_cache.active_nodes counter having a
false value. In the last patch this change was missed, so correct it here.

Change-Id: Ic0298162cc766c0f21be7ab232e259766654dad3
Signed-off-by: Ravi Kumar Siddojigari <rsiddoji@codeaurora.org>
---
 security/selinux/avc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/security/selinux/avc.c b/security/selinux/avc.c
index 91d24c2..3d1cff2 100644
--- a/security/selinux/avc.c
+++ b/security/selinux/avc.c
@@ -913,7 +913,7 @@ static int avc_update_node(struct selinux_avc *avc,
        if (orig->ae.xp_node) {
                rc = avc_xperms_populate(node, orig->ae.xp_node);
                if (rc) {
-                       kmem_cache_free(avc_node_cachep, node);
+                       avc_node_kill(avc, node);
                        goto out_unlock;
                }
        }
--
1.9.1

Br,


-----Original Message-----
From: selinux-owner@vger.kernel.org <selinux-owner@vger.kernel.org> On Behalf Of Paul Moore
Sent: Thursday, December 19, 2019 7:50 AM
To: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Ravi Kumar Siddojigari <rsiddoji@codeaurora.org>; selinux@vger.kernel.org; linux-security-module@vger.kernel.org
Subject: Re: Looks like issue in handling active_nodes count in 4.19 kernel .

On Tue, Dec 17, 2019 at 10:51 AM Stephen Smalley <sds@tycho.nsa.gov> wrote:
>
> On 12/17/19 10:40 AM, Ravi Kumar Siddojigari wrote:
> > Yes, indeed this is a stress test on an ARM64 multicore device where most of the cores/tasks are stuck in avc_reclaim_node().
> > We still see this issue even after picking the earlier patch "selinux: ensure we cleanup the internal AVC counters on error in avc_insert()" (commit d8db60cb23e4),
> > where selinux_state during the issue was as below: all the slots are NULL and the count was above the threshold.
> > That seems to make avc_reclaim_node() run every time, and since all the slots are empty it walks the full for-loop with lock/unlock, taking too long.
> > We are not sure what could make the slots NULL; it is certainly not due to flush()/reset(). We think we still need to call avc_node_kill() in avc_update_node().
> > Can you please review or correct the patch below?
> >
> >
> >    selinux_state = (
> >      disabled = FALSE,
> >      enforcing = TRUE,
> >      checkreqprot = FALSE,
> >      initialized = TRUE,
> >      policycap = (TRUE, TRUE, TRUE, FALSE, FALSE, TRUE),
> >      avc = 0xFFFFFF9BEFF1E890 -> (
> >      avc_cache_threshold = 512,  /* <<<<< not configured; default value */
> >        avc_cache = (
> >          slots = ((first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first = 0x0), (first   /*<<<< all are NULL */
> >          slots_lock = ((rlock = (raw_lock = (val = (counter = 0), locked = 0, pending = 0, locked_pending = 0, tail = 0), magic = 3735899821, owner_cpu = 4294967295, owner = 0xFFFFFFFFFFFFFFFF, dep_map = (key = 0xFFFFFF9BEFF298A8, cla
> >          lru_hint = (counter = 616831529),
> >          active_nodes = (counter = 547),   /*<<<<< increased more than 512*/
> >          latest_notif = 1)),
> >      ss = 0xFFFFFF9BEFF2E578)
> >
> >
> > --
> > In AVC update we don't call avc_node_kill() when
> > avc_xperms_populate() fails, resulting in the
> > avc->avc_cache.active_nodes counter having a false value. In the last patch this change was missed, so correct it here.
> >
> > Change-Id: Ic0298162cc766c0f21be7ab232e259766654dad3
> > Signed-off-by: Jaihind Yadav<jaihindyadav@codeaurora.org>
> > ---
> >   security/selinux/avc.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/security/selinux/avc.c b/security/selinux/avc.c index 
> > 91d24c2..3d1cff2 100644
> > --- a/security/selinux/avc.c
> > +++ b/security/selinux/avc.c
> > @@ -913,7 +913,7 @@ static int avc_update_node(struct selinux_avc *avc,
> >          if (orig->ae.xp_node) {
> >                  rc = avc_xperms_populate(node, orig->ae.xp_node);
> >                  if (rc) {
> > -                       kmem_cache_free(avc_node_cachep, node);
> > +                       avc_node_kill(avc, node);
> >                          goto out_unlock;
> >                  }
> >          }
> > --
>
> That looks correct to me; I guess that one got missed by the prior fix.
> Still not sure how your AVC got into that state though...
>
> Acked-by: Stephen Smalley <sds@tycho.nsa.gov>

This looks good to me too.  Ravi, can you submit this as a proper patch with From: set to Jaihind Yadav (assuming they are the author) and your sign-off?

Thanks.

--
paul moore
www.paul-moore.com

Thread overview: 18+ messages
     [not found] <0101016eeb5fdf43-18f58c0b-8670-43eb-ad08-60dae381f0fd-000000@us-west-2.amazonses.com>
2019-12-09 18:05 ` Looks like issue in handling active_nodes count in 4.19 kernel Stephen Smalley
2019-12-09 18:30   ` rsiddoji
2019-12-11 14:37     ` Stephen Smalley
2019-12-11 14:47       ` Stephen Smalley
2019-12-11 15:35         ` rsiddoji
     [not found]         ` <0101016ef59a2152-41e65aac-8784-4401-b20d-45b2852872d4-000000@us-west-2.amazonses.com>
2019-12-11 15:53           ` Stephen Smalley
2019-12-17 15:40             ` Ravi Kumar Siddojigari
2019-12-17 15:52               ` Stephen Smalley
2019-12-17 16:23                 ` Stephen Smalley
2019-12-18  5:58                   ` Ravi Kumar Siddojigari
2019-12-18 13:53                     ` Stephen Smalley
2019-12-19  2:20                 ` Paul Moore
2019-12-19  9:48                   ` Ravi Kumar Siddojigari [this message]
2019-12-19 16:00                     ` Stephen Smalley
2019-12-19 18:11                     ` Paul Moore
2019-12-20 12:03                       ` Ravi Kumar Siddojigari
2019-12-21 16:02                         ` Paul Moore
2019-12-09 15:55 rsiddoji
