public inbox for linux-audit@redhat.com
 help / color / mirror / Atom feed
* content and format?
@ 2006-12-14 17:24 John Calcote
  2006-12-14 18:25 ` Steve Grubb
  0 siblings, 1 reply; 5+ messages in thread
From: John Calcote @ 2006-12-14 17:24 UTC (permalink / raw)
  To: linux-audit

[-- Attachment #1: Type: text/plain, Size: 271 bytes --]

So what's in the future for linux audit regarding content and format? My company's really interested in this aspect of audit in order to provide good analytical tools for audit logs.

Anyone?

-----
John Calcote (jcalcote@novell.com)
Sr. Software Engineer
Novell, Inc.


[-- Attachment #2: John Calcote.vcf --]
[-- Type: text/plain, Size: 410 bytes --]

BEGIN:VCARD
VERSION:2.1
X-GWTYPE:USER
FN:John Calcote
TEL;WORK:1-801-861-7517
ORG:;Unified Identity System Eng TE
TEL;PREF;FAX:801/861-2292
EMAIL;WORK;PREF;NGW:JCALCOTE@novell.com
N:Calcote;John;;Sr. Software Engineer
TITLE:Sr. Software Engineer
ADR;DOM;WORK;PARCEL;POSTAL:;PRV-H-511;;Provo
LABEL;DOM;WORK;PARCEL;POSTAL;ENCODING=QUOTED-PRINTABLE:John Calcote=0A=
PRV-H-511=0A=
Provo
END:VCARD



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: content and format?
  2006-12-14 17:24 content and format? John Calcote
@ 2006-12-14 18:25 ` Steve Grubb
  2007-01-02 17:51   ` John Calcote
  2007-01-02 22:12   ` John Calcote
  0 siblings, 2 replies; 5+ messages in thread
From: Steve Grubb @ 2006-12-14 18:25 UTC (permalink / raw)
  To: linux-audit

On Thursday 14 December 2006 12:24, John Calcote wrote:
> So what's in the future for linux audit regarding content and format?

I think we should be in a position to allow reformatting of audit information 
on the fly early next year. I think the key to doing this, as well as to 
creating many new tools, is the audit parsing library.

This library has been spec'ed out and designed with higher level languages in 
mind. http://people.redhat.com/sgrubb/audit/audit-parse.txt The first problem 
that anyone runs into if they want to make tools is how to parse the events. 
This library will let you get past having to study all the messages to create 
parsing rules.
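To make the problem concrete, here is a rough sketch (in Python, purely for illustration) of the kind of hand-rolled parsing the library is meant to spare everyone; the record text below is a representative example, not real daemon output, and this is not the library's API:

```python
# Sketch: hand-parsing one audit record into name=value fields.
# The record text is a made-up but representative example.
def parse_record(record: str) -> dict:
    """Split an audit record into its name=value fields."""
    fields = {}
    for token in record.split():
        if "=" in token:
            name, _, value = token.partition("=")
            fields[name] = value.strip('"')
    return fields

record = ('type=SYSCALL msg=audit(1166117040.123:456): arch=c000003e '
          'syscall=2 success=yes exit=3 comm="cat"')
fields = parse_record(record)
print(fields["type"], fields["syscall"], fields["comm"])  # -> SYSCALL 2 cat
```

Every tool author ends up writing some variant of this by hand after studying the message formats; a shared library replaces it with one tested implementation.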

The audit daemon has been created with a realtime interface so that other 
analytical programs can get their hands on the data in near realtime. This 
offers a lot of advantages over cron based techniques that read from a file. 
The realtime interface lets the daemon itself be simple so that it can pass a 
CAPP/LSPP eval and yet offer expansion capabilities.

The plan to allow other formats, reactive programs, or centralized logging is 
to create a dispatcher that reads the output of the daemon and hands the data 
to programs that have subscribed to it. Right now, we have a primitive 
dispatcher to test the concept out with SE Linux where a program analyzes 
events and offers help to users if they see a pattern that would suggest a 
boolean needs to be changed.
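The subscribe-and-dispatch idea can be sketched in a few lines; the class and method names here are hypothetical, not the actual dispatcher's interface:

```python
# Sketch of the dispatcher idea: fan events from the audit daemon out
# to subscribed plugins. Names are hypothetical, for illustration only.
class Dispatcher:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        """Register a plugin callback to receive every event."""
        self.subscribers.append(callback)

    def dispatch(self, event: str):
        """Hand one event to every subscribed plugin."""
        for callback in self.subscribers:
            callback(event)

seen = []
d = Dispatcher()
d.subscribe(seen.append)        # e.g. an SE Linux pattern analyzer
d.subscribe(lambda e: None)     # e.g. a remote-logging plugin
d.dispatch("type=AVC msg=audit(1166117040.123:457): avc: denied")
print(seen[0])
```

The daemon itself stays simple (it only emits events); all the expansion lives in the plugins behind the dispatcher.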

There is another dispatcher that is close to what I am thinking of:
http://www.linuon.com/dowloads/led/

Anyway, what we can do is have a plugin that takes audit events and uses the 
parser library to extract the fields it needs from a message, and then writes 
them to disk or sends them across the network.

John, would this scheme work for you?

-Steve

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: content and format?
  2006-12-14 18:25 ` Steve Grubb
@ 2007-01-02 17:51   ` John Calcote
  2007-01-02 22:12   ` John Calcote
  1 sibling, 0 replies; 5+ messages in thread
From: John Calcote @ 2007-01-02 17:51 UTC (permalink / raw)
  To: linux-audit, Steve Grubb; +Cc: Pat Felsted

[-- Attachment #1: Type: text/plain, Size: 2524 bytes --]

Steve,

Sorry it took so long to get back to you on this note. I was out of town for a week, and then took Christmas break and tried to stay away from work during that time. :)

This looks good, Steve. I can't wait to see more details as you make them available. I can't immediately see how this parser library will work, but I'm sure that with the information you'll be providing in the next few months I can make everything work together, so my team can provide and access all of the benefits of LAF in our project.

I'll follow the list and keep you posted on what we're doing - just FYI.

--john

-----
John Calcote (jcalcote@novell.com)
Sr. Software Engineer
Novell, Inc.


>>> Steve Grubb <sgrubb@redhat.com> 12/14/06 11:25 AM >>>
On Thursday 14 December 2006 12:24, John Calcote wrote:
> So what's in the future for linux audit regarding content and format?

I think we should be in a position to allow reformatting of audit information 
on the fly early next year. I think the key to doing this, as well as to 
creating many new tools, is the audit parsing library.

This library has been spec'ed out and designed with higher level languages in 
mind. http://people.redhat.com/sgrubb/audit/audit-parse.txt The first problem 
that anyone runs into if they want to make tools is how to parse the events. 
This library will let you get past having to study all the messages to create 
parsing rules.

The audit daemon has been created with a realtime interface so that other 
analytical programs can get their hands on the data in near realtime. This 
offers a lot of advantages over cron based techniques that read from a file. 
The realtime interface lets the daemon itself be simple so that it can pass a 
CAPP/LSPP eval and yet offer expansion capabilities.

The plan to allow other formats, reactive programs, or centralized logging is 
to create a dispatcher that reads the output of the daemon and hands the data 
to programs that have subscribed to it. Right now, we have a primitive 
dispatcher to test the concept out with SE Linux where a program analyzes 
events and offers help to users if they see a pattern that would suggest a 
boolean needs to be changed.

There is another dispatcher that is close to what I am thinking of:
http://www.linuon.com/dowloads/led/ 

Anyway, what we can do is have a plugin that takes audit events and uses the 
parser library to extract the fields it needs from a message, and then writes 
them to disk or sends them across the network.

John, would this scheme work for you?

-Steve


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: content and format?
  2006-12-14 18:25 ` Steve Grubb
  2007-01-02 17:51   ` John Calcote
@ 2007-01-02 22:12   ` John Calcote
  2007-01-02 22:54     ` Steve Grubb
  1 sibling, 1 reply; 5+ messages in thread
From: John Calcote @ 2007-01-02 22:12 UTC (permalink / raw)
  To: linux-audit, Steve Grubb

Steve,

On closer re-reading of this response, I find I have a few more questions:

>>> Steve Grubb <sgrubb@redhat.com> 12/14/06 11:25 AM >>>
On Thursday 14 December 2006 12:24, John Calcote wrote:
>> So what's in the future for linux audit regarding content and format?

> I think we should be in position to allow reformatting of audit information on 
> the fly early next year. I think the key to doing this as well as creating 
> many new tools will hinge on the audit parsing library.

> This library has been spec'ed out and designed with higher level languages in 
> mind. http://people.redhat.com/sgrubb/audit/audit-parse.txt The first problem 
> that anyone runs into if they want to make tools is how to parse the events. 
> This library will let you get past having to study all the messages to create 
> parsing rules.

I guess what I'm wondering is this: Why is a parser library even necessary? I'm not questioning your intent - I'm wondering what the rationale is behind such a library. Is it that you're trying to find a way of not defining ANY formatting rules or event taxonomy (standardized event type hierarchy) so the system will be as useful as possible to as many domains as possible?

If so, great! This is a wonderful ideal, but is it really necessary? I mean, this is a fairly new system, and everyone understands that it's being defined (by this community) right now. Isn't this a great opportunity (really the only opportunity) to impose some reasonable amount of message and event type structure, so that everyone gets the incredible advantages of standardized, structured audit data? A reasonable amount of structure and taxonomy information also alleviates about 80 percent of the need for a parsing library, since such a library would primarily be focused on returning standard field values - it becomes almost trivial. But more importantly, it becomes very accurate - which (IMHO) is paramount in an auditing system.

> The audit daemon has been created with a realtime interface so that other 
> analytical programs can get their hands on the data in near realtime. This 
> offers a lot of advantages over cron based techniques that read from a file. 
> The realtime interface lets the daemon itself be simple so that it can pass a 
> CAPP/LSPP eval and yet offer expansion capabilities.
> 
> The plan to allow other formats, reactive programs, or centralized logging is 
> to create a dispatcher that reads the output of the daemon and hands the data 
> to programs that have subscribed to it. Right now, we have a primitive 
> dispatcher to test the concept out with SE Linux where a program analyzes 
> events and offers help to users if they see a pattern that would suggest a 
> boolean needs to be changed.

> There is another dispatcher that is close to what I am thinking of:
> http://www.linuon.com/dowloads/led/ 

> Anyway, what we can do is have a plugin that takes audit events and uses the 
> parser library to extract the fields it needs from a message, and then writes 
> them to disk or sends them across the network.

> John, would this scheme work for you?

This sounds great! Please don't take my comments as derisive - I mean only to provide constructive input. I totally recognize you guys are the experts in this area, and I certainly don't claim to understand half of what you're talking about on this list with respect to transport, performance, bandwidth and efficiency.

I've spent a lot of my time at a much higher level - the CIO level in corporations clamoring for better auditing tools. Maybe we're trying to solve two different problems - I don't know - but one thing I'm very aware of is that a system that allows AUTOMATIC analysis of audit logs with a high degree of accuracy is one that will make a LOT of IT departments happy.

I'm also open to the (strong) possibility that I'm missing the bigger picture. My take on quality automatic analysis revolves primarily around a standard imposed event format and taxonomy - perhaps one enforced by the logging UI, perhaps not. In either case, however, a small amount of standardized structure and event taxonomy has a great many benefits in the world of audit log auto-analysis tools.

What other options might I be missing? Any responses would be appreciated greatly.

Thanks!
--John

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: content and format?
  2007-01-02 22:12   ` John Calcote
@ 2007-01-02 22:54     ` Steve Grubb
  0 siblings, 0 replies; 5+ messages in thread
From: Steve Grubb @ 2007-01-02 22:54 UTC (permalink / raw)
  To: John Calcote; +Cc: linux-audit

On Tuesday 02 January 2007 17:12, John Calcote wrote:
> > This library will let you get past having to study all the messages to
> > create parsing rules. 
>
> I guess what I'm wondering is this: Why is a parser library even necessary?

So that you can disassemble the incoming event and rewrite it the way you 
want.

> I'm not questioning your intent - I'm wondering what the rationale is
> behind such a library. Is it that you're trying to find a way of not
> defining ANY formatting rules or event taxonomy (standardized event type
> hierarchy) so the system will be as useful as possible to as many domains
> as possible?

Sort of. The current audit system provides more information than many people 
would like. (Do you really need fsgid? Probably not.) It's not necessary to 
keep all this information if you don't need it. Or if you'd rather have it 
written in an order that's more intuitive to you, or compatible with a 
certain protocol. The first step is to decompose the native event format and 
then you are free to restructure it to your liking.
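This decompose-then-restructure step can be sketched roughly as follows; the field names come from the thread (fsgid, auid, etc.), but the helper function is illustrative and not the parsing library's real API:

```python
# Sketch: decompose a native audit record, drop fields you don't
# need (e.g. fsgid), and re-emit the rest in your preferred order.
# Illustrative only -- not the real parsing library's interface.
def restructure(record: str, drop=("fsgid",), order=None) -> str:
    # Split "name=value" tokens into a dict of fields.
    fields = dict(t.partition("=")[::2] for t in record.split() if "=" in t)
    for name in drop:
        fields.pop(name, None)
    keys = order if order else fields
    return " ".join(f"{k}={fields[k]}" for k in keys if k in fields)

native = "type=SYSCALL syscall=2 success=yes fsgid=0 auid=500"
print(restructure(native, order=("auid", "type", "syscall", "success")))
# -> auid=500 type=SYSCALL syscall=2 success=yes
```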

> If so, great! This is a wonderful ideal, but is it really necessary?

I think so. You might not want subj/obj labeling but someone else might. Same 
with any other attribute.

> I mean, this is a fairly new system, and everyone understands that it's
> being defined (by this community) right now. Isn't this a great opportunity
> (really the only opportunity) to impose some reasonable amount of message
> and event type structure so that everyone get's the incredible advantages
> of standardized structured audit data?

The current audit events are structured. At some point in the future, I'd like 
to allow for binary event data. The parsing library is a step in that 
direction by providing a layer of abstraction from the low level audit record 
formatting. Analytical apps should not concern themselves with the details of 
the format on disk or buffer.

> A reasonable amount of structure and taxonomy information also alleviates
> about 80 percent of the need for a parsing library, as such a library would
> primarily be focused on returning standard field values - it becomes almost
> trivial.

That's what I'm building and yes it is trivial.

> But more importantly -  it becomes very accurate - which (IMHO) is paramount
> in an auditing system.

The current audit system should be very accurate. It provides more information 
than you probably need.

> > John, would this scheme work for you?
>
> This sounds great! Please don't take my comments as derisive - I mean only
> to provide constructive input.

Sure.

> I've spent a lot of my time at a much higher level - CIO level in
> corporations clambering for better auditing tools.

We'll get there. It's just taking a long time, since I have lots of other things 
to do besides this.

> Maybe we're trying to solve two different problems - I don't know - but one
> thing I'm very aware of is that a system that allows AUTOMATIC analysis of
> audit logs with a high degree of accuracy is one that will make a LOT of IT
> departments happy.

No, we're not solving different problems. The intent of this design is to 
allow it to meet audit level 9 in DCID 6/3. It will eventually do real-time 
response to threats. I also look at SOX and FIPS-200 requirements for future 
direction.

> I'm also open to the (strong) possibility that I'm missing the bigger
> picture.

Could be. I don't have much time to write papers about what this thing will do 
or what its current capabilities are. But progress is steady towards better 
event information, flexibility, and analysis. This mailing list is probably the 
best way to find out the future plans and how it works until I (or others) 
have a chance to document it.

> My take on quality automatic analysis revolves primarily around a 
> standard imposed event format and taxonomy 

This should be trivial. It should be mostly a reorg of data in events. There 
may be some things not required by CAPP/LSPP that people would like the audit 
system to collect. We can add those when they are identified. Can you give me 
an example of a standard event? We can map it onto what the current audit 
system is providing to see if anything is missing.
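That mapping exercise might look like this rough sketch; the "standard" schema on the left is hypothetical, invented here purely to show the shape of the comparison:

```python
# Sketch: map a hypothetical "standard" event schema onto native audit
# field names, to check whether anything is missing from native events.
STANDARD_TO_AUDIT = {       # hypothetical schema, for illustration only
    "actor":     "auid",
    "action":    "syscall",
    "outcome":   "success",
    "timestamp": "msg",
}

native = {"auid": "500", "syscall": "2", "success": "yes",
          "msg": "audit(1166117040.123:456)"}

# Fields the native event covers, renamed to the standard schema.
standard = {std: native[aud] for std, aud in STANDARD_TO_AUDIT.items()
            if aud in native}
# Standard fields the native event does not provide.
missing = [std for std, aud in STANDARD_TO_AUDIT.items()
           if aud not in native]
print(standard, missing)
```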

-Steve

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2007-01-02 22:54 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-12-14 17:24 content and format? John Calcote
2006-12-14 18:25 ` Steve Grubb
2007-01-02 17:51   ` John Calcote
2007-01-02 22:12   ` John Calcote
2007-01-02 22:54     ` Steve Grubb

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox