linux-audit.redhat.com archive mirror
From: Steve Grubb <sgrubb@redhat.com>
To: linux-audit@redhat.com, burn@swtf.dyndns.org
Subject: Re: New draft standards
Date: Sat, 26 Dec 2015 11:38:58 -0500	[thread overview]
Message-ID: <7685993.nLr6Gj3DPT@x2> (raw)
In-Reply-To: <1450910640.3232.18.camel@swtf.swtf.dyndns.org>

On Thursday, December 24, 2015 09:44:00 AM Burn Alting wrote:
> On Fri, 2015-12-18 at 16:12 +1100, Burn Alting wrote:
> > On Tue, 2015-12-15 at 08:46 -0500, Steve Grubb wrote:
> > > On Tuesday, December 15, 2015 09:12:54 AM Burn Alting wrote:
> > > > I use a proprietary ELK-like system based on ausearch's -i option.
> > > > I would like to see some variant outputs from ausearch that package
> > > > events into parse-friendly formats (json, xml) and also incorporate
> > > > the local transformations Steve proposes. I believe this would be
> > > > the most generic solution to support centralised log management.
> > > > 
> > > > I am travelling now, but can write up a specification for review
> > > > next week.
> > > 
> > > Yes, please do send something to the mailing list for people to look
> > > at and comment on.
> > 
> > All,
> > 
> > To reiterate, my need is to generate easy-to-parse events to which
> > local interpretation has been applied, retaining the raw input for some
> > of the interpretations if required. I then want to transmit the
> > complete interpreted event to my central event repository.
> > 
> > My proposal is that ausearch gains the following 'interpreted output'
> > options
> > 
> >         --Xo plain|json|xml
> >         generate plain (cf --interpret), xml or json formatted events
> >         
> >         --Xr key_a'+'key_b'+'key_c
> >         include the raw value for the given keys under the new keys
> >         __r_key_a, __r_key_b, etc. The special key __all__ is
> >         interpreted as retaining the complete raw record. If a key has
> >         no interpreted value, then we end up with two keys holding the
> >         same value.
> > 
> > I have attached the XSD from which the XML and JSON formats could be
> > defined.
> 
> Is there any interest in this? If it were available, would people make
> use of it?

I'm somewhat interested in this. I'm just not sure where the best place to do 
all this is. Should it be in ausearch? Should it be in auditd? Should it be in 
the remote logging plugin? Should audit utilities be modified to accept this 
new form of input?
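
For illustration, the raw-plus-interpreted pairing proposed above does not
have to wait for a decision on where it lives; a rough, untested sketch of
it can be built directly on the existing auparse calls. The record and
field iteration below is the current auparse API; the __r_ key prefix comes
from the proposal, while the JSON-like layout and the file name are only
placeholders for whatever format is finally agreed on (quoting/escaping and
trailing commas are ignored for brevity):

/* Sketch only: walk events from the audit logs and print every field's
 * interpreted value next to its raw value under a __r_ prefixed key,
 * roughly in the spirit of the proposed --Xr option.
 * Build with: gcc xr-sketch.c -o xr-sketch -lauparse -laudit
 */
#include <stdio.h>
#include <libaudit.h>
#include <auparse.h>

int main(void)
{
    auparse_state_t *au = auparse_init(AUSOURCE_LOGS, NULL);
    if (au == NULL) {
        perror("auparse_init");
        return 1;
    }

    while (auparse_next_event(au) > 0) {
        if (auparse_first_record(au) <= 0)
            continue;
        printf("{\n");
        do {    /* each record in the event */
            const char *type =
                audit_msg_type_to_name(auparse_get_type(au));
            printf("  \"type\": \"%s\",\n", type ? type : "?");
            if (auparse_first_field(au) == 0)
                continue;
            do {    /* each field in the record */
                const char *name = auparse_get_field_name(au);
                const char *raw = auparse_get_field_str(au);
                const char *val = auparse_interpret_field(au);
                printf("  \"%s\": \"%s\", \"__r_%s\": \"%s\",\n",
                       name, val ? val : raw, name, raw);
            } while (auparse_next_field(au) > 0);
        } while (auparse_next_record(au) > 0);
        printf("}\n");
    }

    auparse_destroy(au);
    return 0;
}

Reading the local logs needs access to /var/log/audit, and the same loop
would work unchanged with AUSOURCE_FILE or AUSOURCE_BUFFER, so the
formatting code itself would not care whether it ends up in ausearch, a
plugin, or a separate tool.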

Ultimately, I want to be able to reduce audit records to English sentences,
something like this:

On 1-node at 2-time 3-subj 4-acting-as 5-results 6-action 7-what 8-using

Which maps to:
1) node
2) time
3) auid, failed logins=remote system
4) uid (only when uid != auid) or role (when not unconfined_t)
5) res - successfully / failed to
6) op, syscall, type, key - requires per-type classification
7) path, system
8) exe, comm

So, what I was thinking about is looking at the whole event and picking out 
the node, time, subject, object, action, and results. The subject and object 
would be further broken down to primary identity, secondary identity, and 
attributes. I was planning to put this into an extension of auparse so that 
events could be dumped out using the classification system.

My thinking has been to organize the event data to support something along
these lines. I want to make the events easier to understand.
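
As a very rough and untested sketch of that direction, using only the
current auparse API and only a subset of the mapping above (node, time,
auid, res, syscall and exe, with the per-type classification of the action
and the acting-as/object handling left out), the sentence assembly could
start out looking something like this:

/* Sketch only: pull a few of the subject/result/action/object pieces out
 * of each event and print them as a rough English sentence along the
 * lines of the mapping above.
 * Build with: gcc sentence-sketch.c -o sentence-sketch -lauparse
 */
#include <stdio.h>
#include <time.h>
#include <auparse.h>

/* Rewind to the first record and return the interpreted value of the
 * first field with this name anywhere in the event, or "?" if absent. */
static const char *get_interp(auparse_state_t *au, const char *name)
{
    auparse_first_record(au);
    if (auparse_find_field(au, name))
        return auparse_interpret_field(au);
    return "?";
}

int main(void)
{
    auparse_state_t *au = auparse_init(AUSOURCE_LOGS, NULL);
    if (au == NULL) {
        perror("auparse_init");
        return 1;
    }

    while (auparse_next_event(au) > 0) {
        const au_event_t *e = auparse_get_timestamp(au);
        char when[32] = "?";
        if (e) {
            struct tm *tm = localtime(&e->sec);
            if (tm)
                strftime(when, sizeof(when), "%F %T", tm);
        }

        /* On 1-node at 2-time 3-subj 5-results 6-action 8-using */
        printf("On %s at %s, %s %s %s using %s\n",
               get_interp(au, "node"), when,
               get_interp(au, "auid"),
               get_interp(au, "res"),
               get_interp(au, "syscall"),
               get_interp(au, "exe"));
    }

    auparse_destroy(au);
    return 0;
}

The real classification work is deciding, per record type, which fields
stand in for the action and the object; an auparse extension that exposes
those picks as named accessors would keep that policy in one place instead
of in every consumer.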

 
> If so I can modify ausearch and generate a proposed patch over the
> Christmas break.

At the moment, I'm looking at auditd performance improvements to prepare for
the enrichment of audit records. You're one step ahead of where I am. I hope
to finish this performance work soon so that I can start thinking about the
problem you're working on.  :-)

Of course...we could look at the auditd performance issues together and then 
move on to event formatting.

-Steve


Thread overview: 25+ messages
2015-12-08 19:22 New draft standards Steve Grubb
2015-12-08 19:58 ` Paul Moore
2015-12-08 20:25   ` Steve Grubb
2015-12-09  0:28     ` Paul Moore
2015-12-09  1:43       ` Burn Alting
2015-12-10 22:49         ` Steve Grubb
2015-12-10 22:59           ` Paul Moore
2015-12-15  5:11             ` Richard Guy Briggs
2015-12-10  4:35       ` Steve Grubb
2015-12-10 16:50         ` Paul Moore
2015-12-10 17:40         ` F Rafi
2015-12-14 15:34           ` Steve Grubb
2015-12-14 16:38             ` Joe Wulf
2015-12-14 17:01               ` Kevin.Dienst
2015-12-14 22:12                 ` Burn Alting
2015-12-15 13:46                   ` Steve Grubb
2015-12-18  5:12                     ` Burn Alting
2015-12-23 22:44                       ` Burn Alting
2015-12-26 16:38                         ` Steve Grubb [this message]
2015-12-27  0:30                           ` Burn Alting
2015-12-27 15:06                             ` Steve Grubb
2015-12-28  7:24                               ` Burn Alting
2015-12-29 19:28             ` LC Bruzenak
2015-12-08 20:49 ` Richard Guy Briggs
2015-12-08 21:28   ` Steve Grubb
