public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [RFC 0/6] Backing Store for sysfs
@ 2003-10-06  8:59 Maneesh Soni
  2003-10-06 16:08 ` Greg KH
  2003-10-06 18:44 ` Patrick Mochel
  0 siblings, 2 replies; 34+ messages in thread
From: Maneesh Soni @ 2003-10-06  8:59 UTC (permalink / raw)
  To: Al Viro, Patrick Mochel, Greg KH; +Cc: LKML, Dipankar Sarma


Hi,

The following patch set (mailed separately) provides a prototype for a backing 
store mechanism for sysfs. Currently sysfs pins all its dentries and inodes in 
memory, thereby wasting kernel lowmem even when it is not mounted. 

With this patch set we create sysfs dentries on demand, as other real 
filesystems do, and age and free them as per the dcache rules. We now save a 
significant amount of lowmem by avoiding unnecessary pinning. 
The following numbers were collected on a 2-way system with 6 disks and 2 NICs 
with about 1028 dentries. The numbers are only indicative, as they are 
system-wide (collected from /proc) and are not exclusively for sysfs.

				2.6.0-test6		With patches.
Right after system is booted
---------------------------
dentry_cache (active)		2343			1315
inode_cache (active)		1058			30
LowFree				875096 KB		875900 KB

After mounting sysfs
-------------------
dentry_cache (active)		2350			1321
inode_cache (active)		1058			31
LowFree				875096 KB		875836 KB

After "find /sys"
-----------------
dentry_cache (active)		2520			2544
inode_cache (active)		1058			1050
LowFree				875032 KB		874748 KB

After un-mounting sysfs
-----------------------
dentry_cache (active)		2363			1319
inode_cache (active)		1058			30
LowFree				875032 KB		875836 KB


The main idea is to not create the dentry in the sysfs_create_xxx calls, but to
create it when it is first looked up. We now have a lookup() inode operation, 
and open and close file operations, for sysfs directory inodes. 

The backing store is based on the kobjects, which are always in memory; 
sysfs lookup is based on the hierarchy of kobjects. As the current kobject 
infrastructure does not provide any means to traverse a kobject's children or 
siblings, two-way hierarchy lookup was not possible. For this, new fields 
are added to the kobject structure. This ends up increasing the size of a 
kobject from 52 bytes to 108 bytes, but saves one dentry and one inode per 
kobject.

The details of the patches are in the following mails. For testing, please
apply all the patches; they are split only for ease of review.

Please send me comments on the approach, the implementation, anything I have 
missed, and suggestions for improvement.

Thanks,
Maneesh

-- 
Maneesh Soni
Linux Technology Center, 
IBM Software Lab, Bangalore, India
email: maneesh@in.ibm.com
Phone: 91-80-5044999 Fax: 91-80-5268553
T/L : 9243696

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
@ 2003-10-06 12:34 Christian Borntraeger
  0 siblings, 0 replies; 34+ messages in thread
From: Christian Borntraeger @ 2003-10-06 12:34 UTC (permalink / raw)
  To: Maneesh Soni; +Cc: Al Viro, Patrick Mochel, Greg KH, linux-kernel

> Hi,
> 
> The following patch set (mailed separately) provides a prototype for a
> backing store mechanism for sysfs. Currently sysfs pins all its dentries
> and inodes in memory, thereby wasting kernel lowmem even when it is not
> mounted.
> 
> With this patch set we create sysfs dentries on demand, as other real
> filesystems do, and age and free them as per the dcache rules. We now
> save a significant amount of lowmem by avoiding unnecessary pinning.

A more mature version of this patch could be a solution to some problems we 
have faced with sysfs. I have an s390 test system with ~20000 devices. Memory 
consumption _without_ this patch is horribly high: slab uses 346028 kB of 
memory, most of it dentry and inode cache. 
I tried the patch: it boots, and memory usage is much better, but it is 
somewhat broken with our ccw devices, as I cannot bring up our ccwgroup 
network devices. Therefore I don't have reliable memory results.
Almost nobody would use 20000 devices on an S390, but with some shared 
OSA cards, 100 or 200 devices are realistic. Even in this case, memory 
consumption is much higher than with 2.4.

I still have to look more closely at this patch to see if there are deeper 
problems. Until I find something, I think this patch could be really helpful 
for computers with lots of devices.

-- 
Mit freundlichen Grüßen / Best Regards

Christian Bornträger
IBM Deutschland Entwicklung GmbH
eServer SW  System Evaluation + Test
email: CBORNTRA@de.ibm.com
Tel +49  7031 16 1975


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06  8:59 Maneesh Soni
@ 2003-10-06 16:08 ` Greg KH
  2003-10-06 17:31   ` Dipankar Sarma
  2003-10-06 18:44 ` Patrick Mochel
  1 sibling, 1 reply; 34+ messages in thread
From: Greg KH @ 2003-10-06 16:08 UTC (permalink / raw)
  To: Maneesh Soni; +Cc: Al Viro, Patrick Mochel, LKML, Dipankar Sarma

On Mon, Oct 06, 2003 at 02:29:15PM +0530, Maneesh Soni wrote:
> 
> 				2.6.0-test6		With patches.
> -----------------
> dentry_cache (active)		2520			2544
> inode_cache (active)		1058			1050
> LowFree			875032 KB		874748 KB

So with these patches we actually eat up more LowFree if all sysfs
entries are searched, and make the dentry_cache bigger?  That's not good :(

Remember, every kobject that's created will cause a call to
/sbin/hotplug which will cause udev to walk the sysfs tree to get the
information for that kobject.  So I don't see any savings in these
patches, do you?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 16:08 ` Greg KH
@ 2003-10-06 17:31   ` Dipankar Sarma
  2003-10-06 17:38     ` Greg KH
  0 siblings, 1 reply; 34+ messages in thread
From: Dipankar Sarma @ 2003-10-06 17:31 UTC (permalink / raw)
  To: Greg KH; +Cc: Maneesh Soni, Al Viro, Patrick Mochel, LKML

On Mon, Oct 06, 2003 at 09:08:46AM -0700, Greg KH wrote:
> On Mon, Oct 06, 2003 at 02:29:15PM +0530, Maneesh Soni wrote:
> > 
> > 				2.6.0-test6		With patches.
> > -----------------
> > dentry_cache (active)		2520			2544
> > inode_cache (active)		1058			1050
> > LowFree			875032 KB		874748 KB
> 
> So with these patches we actually eat up more LowFree if all sysfs
> entries are searched, and make the dentry_cache bigger?  That's not good :(

My guess is that those 24 dentries are just noise. What we should
do is verify with a large number of devices if the numbers are all
that different after a walk of the sysfs tree.

> 
> Remember, every kobject that's created will cause a call to
> /sbin/hotplug which will cause udev to walk the sysfs tree to get the
> information for that kobject.  So I don't see any savings in these
> patches, do you?

Assuming that unused files/dirs are aged out of dentry and inode cache,
it should benefit. The numbers you should look at are -

--------------------------------------------------------
After mounting sysfs
-------------------
dentry_cache (active)           2350                    1321
inode_cache (active)            1058                    31
LowFree                         875096 KB               875836 KB
--------------------------------------------------------

That saves ~800KB. If you just mount sysfs and use a few files, you
aren't eating up dentries and inodes for every file in sysfs. How often 
do you expect hotplug events to happen in a system ? Some time after a 
hotplug event, dentries/inodes will get aged out and then you should see 
savings. It should greatly benefit in a normal system.

Now if the additional kobjects cause problems with userland hotplug, then 
that needs to be resolved. However that seems to be a different problem 
altogether. Could you please elaborate on that ?

Thanks
Dipankar

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
@ 2003-10-06 17:38 Christian Borntraeger
  2003-10-06 17:41 ` Greg KH
  0 siblings, 1 reply; 34+ messages in thread
From: Christian Borntraeger @ 2003-10-06 17:38 UTC (permalink / raw)
  To: Greg KH; +Cc: Al Viro, Patrick Mochel, LKML, Dipankar Sarma

Greg KH wrote:

> On Mon, Oct 06, 2003 at 02:29:15PM +0530, Maneesh Soni wrote:
>> 
>> 2.6.0-test6          With patches.
>> -----------------
>> dentry_cache (active)                2520                    2544
>> inode_cache (active)         1058                    1050
>> LowFree                      875032 KB               874748 KB
> 
> So with these patches we actually eat up more LowFree if all sysfs
> entries are searched, and make the dentry_cache bigger?  That's not good
> :(
[...]
> information for that kobject.  So I don't see any savings in these
> patches, do you?

I do. As stated earlier, with 20000 devices on an S390 guest I have around 
350MB of slab memory after rebooting. 
With this patch, the slab memory reduces to 60MB. 
This becomes even nastier: the kernel crashes during bootup if I give this 
guest only 256M (this happens with the current sysfs, not with this 
patch):

fixpoint divide exception: 0009 [#1] 
CPU:    0    Not tainted 
Process cio/0 (pid: 18, task: 000000000b84a810, ksp: 000000000b81f0a8) 
Krnl PSW : 0700000180000000 0000000000066aa2 
Krnl GPRS: 000000000000245e 0000000000000000 0000000000000000 0000000000000000 
           00000000003b5110 0000000000000000 0000000000000000 000000000030c008 
           0000000000000044 0000000000000020 000000000030be00 00000000009fb8b0 
           00000000009fb880 00000000002b12b0 00000000000668f0 000000000b81f0a8 
Krnl ACRS: 00000000 00000000 00000000 00000000 
           00000000 00000000 00000000 00000000 
           00000000 00000000 00000000 00000000 
           00000000 00000000 00000000 00000000 
Krnl Code: eb 13 00 3f 00 0c b9 08 00 13 58 40 a4 04 a7 28 00 64 8a 20 
Call Trace: 
 [<00000000000671c2>] shrink_zone+0x9e/0xc4 
 [<00000000000672c2>] shrink_caches+0xda/0xf4 
 [<00000000000673ae>] try_to_free_pages+0xd2/0x1b4 
 [<000000000005d812>] __alloc_pages+0x2aa/0x48c 
 [<000000000005da42>] __get_free_pages+0x4e/0x8c 
 [<0000000000061bfa>] cache_grow+0x116/0x40c 
 [<00000000000620ec>] cache_alloc_refill+0x1fc/0x328 
 [<000000000006258a>] kmem_cache_alloc+0xa2/0xb0 
 [<000000000009e094>] alloc_inode+0x1bc/0x1c0 
 [<000000000009ee40>] new_inode+0x20/0xb0 
 [<00000000000c50bc>] sysfs_new_inode+0x2c/0xb4 
 [<00000000000c519a>] sysfs_create+0x56/0xe0 
 [<00000000000c5bba>] sysfs_add_file+0xd2/0xf8 
 [<00000000000c6dce>] create_files+0x3e/0x84 
 [<00000000000c6e80>] sysfs_create_group+0x6c/0xe4 
 [<000000000016a508>] io_subchannel_register+0x54/0xec 
 [<000000000004b5ce>] worker_thread+0x21e/0x31c 
 [<0000000000019b68>] kernel_thread_starter+0x14/0x1c 

I agree that this patch is still borked; even some of the s390 devices don't 
work. Nevertheless, the idea of making this dentry/inode-cache memory 
freeable is good. I don't know why, but currently each device eats much more 
slab memory than a pagesize.
As far as I understood Dipankar's mail, this patch is more a proof of 
concept than a mergeable patch. If we find another solution to reduce the 
memory consumption of sysfs, I would be happy to accept different ideas.

cheers Christian

-- 
Mit freundlichen Grüßen / Best Regards

Christian Bornträger
IBM Deutschland Entwicklung GmbH
eServer SW  System Evaluation + Test
email: CBORNTRA@de.ibm.com
Tel +49  7031 16 1975


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 17:31   ` Dipankar Sarma
@ 2003-10-06 17:38     ` Greg KH
  2003-10-06 18:01       ` Dipankar Sarma
  0 siblings, 1 reply; 34+ messages in thread
From: Greg KH @ 2003-10-06 17:38 UTC (permalink / raw)
  To: Dipankar Sarma; +Cc: Maneesh Soni, Al Viro, Patrick Mochel, LKML

On Mon, Oct 06, 2003 at 11:01:11PM +0530, Dipankar Sarma wrote:
> On Mon, Oct 06, 2003 at 09:08:46AM -0700, Greg KH wrote:
> > On Mon, Oct 06, 2003 at 02:29:15PM +0530, Maneesh Soni wrote:
> > > 
> > > 				2.6.0-test6		With patches.
> > > -----------------
> > > dentry_cache (active)		2520			2544
> > > inode_cache (active)		1058			1050
> > > LowFree			875032 KB		874748 KB
> > 
> > So with these patches we actually eat up more LowFree if all sysfs
> > entries are searched, and make the dentry_cache bigger?  That's not good :(
> 
> My guess is that those 24 dentries are just noise. What we should
> do is verify with a large number of devices if the numbers are all
> that different after a walk of the sysfs tree.

Ok, a better test would be with a _lot_ of devices.  Care to test with a
lot of scsi debug devices?

> > Remember, every kobject that's created will cause a call to
> > /sbin/hotplug which will cause udev to walk the sysfs tree to get the
> > information for that kobject.  So I don't see any savings in these
> > patches, do you?
> 
> Assuming that unused files/dirs are aged out of dentry and inode cache,
> it should benefit. The numbers you should look at are -
> 
> --------------------------------------------------------
> After mounting sysfs
> -------------------
> dentry_cache (active)           2350                    1321
> inode_cache (active)            1058                    31
> LowFree                         875096 KB               875836 KB
> --------------------------------------------------------
> 
> That saves ~800KB. If you just mount sysfs and use a few files, you
> aren't eating up dentries and inodes for every file in sysfs. How often 
> do you expect hotplug events to happen in a system ?

Every kobject that is created and is associated with a subsystem
generates a hotplug call.  So that's about every kobject that we care
about here :)

> Some time after a hotplug event, dentries/inodes will get aged out and
> then you should see savings. It should greatly benefit in a normal
> system.

Can you show this happening?

> Now if the additional kobjects cause problems with userland hotplug, then 
> that needs to be resolved. However that seems to be a different problem 
> altogether. Could you please elaborate on that ?

No, I don't think the additional ones you have added will cause
problems, but can you verify this?  Just log all hotplug events
happening in your system (point /proc/sys/kernel/hotplug to a simple
logging program).

But again, I don't think the overhead you have added to a kobject
is acceptable for not much gain in the normal case (systems without a
zillion devices.)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 17:38 [RFC 0/6] Backing Store for sysfs Christian Borntraeger
@ 2003-10-06 17:41 ` Greg KH
  2003-10-06 18:00   ` Kevin P. Fleming
  0 siblings, 1 reply; 34+ messages in thread
From: Greg KH @ 2003-10-06 17:41 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: Al Viro, Patrick Mochel, LKML, Dipankar Sarma

On Mon, Oct 06, 2003 at 07:38:06PM +0200, Christian Borntraeger wrote:
> Greg KH wrote:
> 
> > On Mon, Oct 06, 2003 at 02:29:15PM +0530, Maneesh Soni wrote:
> >> 
> >> 2.6.0-test6          With patches.
> >> -----------------
> >> dentry_cache (active)                2520                    2544
> >> inode_cache (active)         1058                    1050
> >> LowFree                      875032 KB               874748 KB
> > 
> > So with these patches we actually eat up more LowFree if all sysfs
> > entries are searched, and make the dentry_cache bigger?  That's not good
> > :(
> [...]
> > information for that kobject.  So I don't see any savings in these
> > patches, do you?
> 
> I do. As stated earlier, with 20000 devices on a S390 guest I have around 
> 350MB slab memory after rebooting. 
> With this patch, the slab memory reduces to 60MB. 

That's good.  But what happens after you run a find over the sysfs tree?
Which is essentially what udev will be doing :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 17:41 ` Greg KH
@ 2003-10-06 18:00   ` Kevin P. Fleming
  2003-10-06 18:11     ` Greg KH
  0 siblings, 1 reply; 34+ messages in thread
From: Kevin P. Fleming @ 2003-10-06 18:00 UTC (permalink / raw)
  To: Greg KH
  Cc: Christian Borntraeger, Al Viro, Patrick Mochel, LKML,
	Dipankar Sarma

Greg KH wrote:

> 
> That's good.  But what happens after you run a find over the sysfs tree?
> Which is essencially what udev will be doing :)
> 

This sounds like an opportunity to improve the udev<->sysfs 
interaction. Does the hotplug event not give udev enough information 
to avoid this "find" search?


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 17:38     ` Greg KH
@ 2003-10-06 18:01       ` Dipankar Sarma
  2003-10-06 18:09         ` Greg KH
  0 siblings, 1 reply; 34+ messages in thread
From: Dipankar Sarma @ 2003-10-06 18:01 UTC (permalink / raw)
  To: Greg KH; +Cc: Maneesh Soni, Al Viro, Patrick Mochel, LKML

On Mon, Oct 06, 2003 at 10:38:58AM -0700, Greg KH wrote:
> On Mon, Oct 06, 2003 at 11:01:11PM +0530, Dipankar Sarma wrote:
> > My guess is that those 24 dentries are just noise. What we should
> > do is verify with a large number of devices if the numbers are all
> > that different after a walk of the sysfs tree.
> 
> Ok, a better test would be with a _lot_ of devices.  Care to test with a
> lot of scsi debug devices?

Sure. At the same time, as Maneesh pointed out, this is just an
RFC. The backing store design probably needs quite some work first.

> > --------------------------------------------------------
> > After mounting sysfs
> > -------------------
> > dentry_cache (active)           2350                    1321
> > inode_cache (active)            1058                    31
> > LowFree                         875096 KB               875836 KB
> > --------------------------------------------------------
> > 
> > That saves ~800KB. If you just mount sysfs and use a few files, you
> > aren't eating up dentries and inodes for every file in sysfs. How often 
> > do you expect hotplug events to happen in a system ?
> 
> Every kobject that is created and is associated with a subsystem
> generates a hotplug call.  So that's about every kobject that we care
> about here :)

That would not happen often in a normal running system, right? So
I don't see the point of looking at mem usage after hotplug events.

> 
> > Some time after a hotplug event, dentries/inodes will get aged out and
> > then you should see savings. It should greatly benefit in a normal
> > system.
> 
> Can you show this happening?

It should be easy to demonstrate. That is how dentries/inodes
work for on-disk filesystems. If Maneesh's patch didn't work that
way, then the whole point is lost. I hope that is not the case.

> 
> > Now if the additional kobjects cause problems with userland hotplug, then 
> > that needs to be resolved. However that seems to be a different problem 
> > altogether. Could you please elaborate on that ?
> 
> No, I don't think the additional ones you have added will cause
> problems, but can you verify this?  Just log all hotplug events
> happening in your system (point /proc/sys/kernel/hotplug to a simple
> logging program).
> 
> But again, I don't think the added overhead you have added to a kobject
> is acceptable for not much gain for the normal case (systems without a
> zillion devices.)

IIRC, Maneesh's test machine is a 2-way P4 Xeon with six SCSI disks, and the
savings are about 800KB. That is as normal a case as it gets, I think.
It only gets better as you have more devices in your system.

Thanks
Dipankar

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:01       ` Dipankar Sarma
@ 2003-10-06 18:09         ` Greg KH
  2003-10-06 18:31           ` Dipankar Sarma
  0 siblings, 1 reply; 34+ messages in thread
From: Greg KH @ 2003-10-06 18:09 UTC (permalink / raw)
  To: Dipankar Sarma; +Cc: Maneesh Soni, Al Viro, Patrick Mochel, LKML

On Mon, Oct 06, 2003 at 11:31:19PM +0530, Dipankar Sarma wrote:
> On Mon, Oct 06, 2003 at 10:38:58AM -0700, Greg KH wrote:
> > On Mon, Oct 06, 2003 at 11:01:11PM +0530, Dipankar Sarma wrote:
> > > --------------------------------------------------------
> > > After mounting sysfs
> > > -------------------
> > > dentry_cache (active)           2350                    1321
> > > inode_cache (active)            1058                    31
> > > LowFree                         875096 KB               875836 KB
> > > --------------------------------------------------------
> > > 
> > > That saves ~800KB. If you just mount sysfs and use a few files, you
> > > aren't eating up dentries and inodes for every file in sysfs. How often 
> > > do you expect hotplug events to happen in a system ?
> > 
> > Every kobject that is created and is associated with a subsystem
> > generates a hotplug call.  So that's about every kobject that we care
> > about here :)
> 
> That would not happen in a normal running system often, right ? So,
> I don't see the point looking at mem usage after hotplug events.

No.  My main point is that for every hotplug event (which is caused by a
kobject being created or destroyed), udev will run and look at the sysfs
entry for the kobject (by using libsysfs which reads in all of the
kobject information, including attributes).  This is a normal event, so
we have to care about what happens after running 'find' on the sysfs
tree as that is basically what will always happen.

Does that make more sense?  We can't just look at what happens with this
patch without actually accessing all of the sysfs tree, as that will be
the "normal" case.

> > > Some time after a hotplug event, dentries/inodes will get aged out and
> > > then you should see savings. It should greatly benefit in a normal
> > > system.
> > 
> > Can you show this happening?
> 
> It should be easy to demonstrate. That is how dentries/inodes
> work for on-disk filesystems. If Maneesh's patch didn't work that
> way, then the whole point is lost. I hope that is not the case.

Me too.  It's just that the free memory numbers didn't show much gain
with this patch on his system.  That worries me.

> > > Now if the additional kobjects cause problems with userland hotplug, then 
> > > that needs to be resolved. However that seems to be a different problem 
> > > altogether. Could you please elaborate on that ?
> > 
> > No, I don't think the additional ones you have added will cause
> > problems, but can you verify this?  Just log all hotplug events
> > happening in your system (point /proc/sys/kernel/hotplug to a simple
> > logging program).
> > 
> > But again, I don't think the added overhead you have added to a kobject
> > is acceptable for not much gain for the normal case (systems without a
> > zillion devices.)
> 
> IIRC, Maneesh test machine is a 2-way P4 xeon with six scsi disks and savings
> are of about 800KB. That is as normal a case as it gets, I think.
> It only gets better as you have more devices in your system.

800Kb after running find?  I don't see that :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:00   ` Kevin P. Fleming
@ 2003-10-06 18:11     ` Greg KH
  2003-10-06 18:23       ` Kevin P. Fleming
  0 siblings, 1 reply; 34+ messages in thread
From: Greg KH @ 2003-10-06 18:11 UTC (permalink / raw)
  To: Kevin P. Fleming
  Cc: Christian Borntraeger, Al Viro, Patrick Mochel, LKML,
	Dipankar Sarma

On Mon, Oct 06, 2003 at 11:00:40AM -0700, Kevin P. Fleming wrote:
> Greg KH wrote:
> 
> >
> >That's good.  But what happens after you run a find over the sysfs tree?
> >Which is essencially what udev will be doing :)
> >
> 
> This sounds like an opportunity to improve the udev<->sysfs 
> interaction. Does the hotplug event not give udev enough information 
> to avoid this "find" search?

The hotplug event points to the sysfs location of the kobject, that's
all.  libsysfs then takes that kobject location and sucks up all of the
attribute information for that kobject, which udev then uses to
determine what it should do.

Unless we want to pass all attribute information through hotplug, which
we do not.

Do you have any suggestions?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
@ 2003-10-06 18:19 Christian Borntraeger
  0 siblings, 0 replies; 34+ messages in thread
From: Christian Borntraeger @ 2003-10-06 18:19 UTC (permalink / raw)
  To: Greg KH; +Cc: Al Viro, Patrick Mochel, LKML, dipankar

Hi Greg,

I just did a test run. There is still more free memory than with a stock 
kernel. I guess some cache entries aged out of existence, and I guess more 
will be removed if I put memory pressure on the system.
Please correct me if I am wrong, but the sysfs dentry and inode caches are 
currently unswappable, right?
But now to the results:


------standard after boot----------
cat /proc/meminfo 
MemTotal:       795612 kB 
MemFree:        175904 kB 
Buffers:          2620 kB 
Cached:         257948 kB 
SwapCached:          0 kB 
Active:          11280 kB 
Inactive:       251392 kB 
HighTotal:           0 kB 
HighFree:            0 kB 
LowTotal:       795612 kB 
LowFree:        175904 kB 
SwapTotal:     1355416 kB 
SwapFree:      1355416 kB 
Dirty:            1044 kB 
Writeback:           0 kB 
Mapped:           5032 kB 
Slab:           346220 kB 
Committed_AS:     4580 kB 
PageTables:        140 kB 
VmallocTotal: 4294139904 kB 
VmallocUsed:      2108 kB 
VmallocChunk: 4294137796 kB 



------with patch after boot-----------
cat /proc/meminfo 
MemTotal:       795612 kB 
MemFree:        702416 kB 
Buffers:          2604 kB 
Cached:          17328 kB 
SwapCached:          0 kB 
Active:          11040 kB 
Inactive:        11080 kB 
HighTotal:           0 kB 
HighFree:            0 kB 
LowTotal:       795612 kB 
LowFree:        702416 kB 
SwapTotal:     1355416 kB 
SwapFree:      1355416 kB 
Dirty:            1040 kB 
Writeback:           0 kB 
Mapped:           5016 kB 
Slab:            61004 kB 
Committed_AS:     4580 kB 
PageTables:        136 kB 
VmallocTotal: 4294139904 kB 
VmallocUsed:      1308 kB 
VmallocChunk: 4294138596 kB 

------with patch after find /sys-----------
cat /proc/meminfo 
MemTotal:       795612 kB 
MemFree:        312312 kB 
Buffers:          2568 kB 
Cached:         257868 kB 
SwapCached:          0 kB 
Active:          11284 kB 
Inactive:       251304 kB 
HighTotal:           0 kB 
HighFree:            0 kB 
LowTotal:       795612 kB 
LowFree:        312312 kB 
SwapTotal:     1355416 kB 
SwapFree:      1355416 kB 
Dirty:               0 kB 
Writeback:           0 kB 
Mapped:           5016 kB 
Slab:           210608 kB 
Committed_AS:     4580 kB 
PageTables:        136 kB 
VmallocTotal: 4294139904 kB 
VmallocUsed:      1308 kB 
VmallocChunk: 4294138596 kB 

By the way, I noticed that this patch slows down the find.

cheers

-- 
Mit freundlichen Grüßen / Best Regards

Christian Bornträger
IBM Deutschland Entwicklung GmbH
eServer SW  System Evaluation + Test
email: CBORNTRA@de.ibm.com
Tel +49  7031 16 1975


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:11     ` Greg KH
@ 2003-10-06 18:23       ` Kevin P. Fleming
  2003-10-06 18:30         ` Greg KH
  0 siblings, 1 reply; 34+ messages in thread
From: Kevin P. Fleming @ 2003-10-06 18:23 UTC (permalink / raw)
  To: Greg KH
  Cc: Christian Borntraeger, Al Viro, Patrick Mochel, LKML,
	Dipankar Sarma

Greg KH wrote:

> The hotplug event points to the sysfs location of the kobject, that's
> all.  libsysfs then takes that kobject location and sucks up all of the
> attribute information for that kobject, which udev then uses to
> determine what it should do.

This sounds like a very different issue than what I thought you said 
originally. Your other message said a "find over the sysfs tree", 
implying some sort of tree-wide search for relevant information. In 
fact, the "find" is only for attributes in the directory owned by the 
kobject, right? Once they have been "found", they will age out of the 
dentry/inode cache just like any other search results.

> 
> Unless we want to pass all attribute information through hotplug, which
> we do not.
> 

I agree, that would be difficult and hard to manage.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:23       ` Kevin P. Fleming
@ 2003-10-06 18:30         ` Greg KH
  2003-10-06 18:38           ` Kevin P. Fleming
  2003-10-07  8:30           ` Maneesh Soni
  0 siblings, 2 replies; 34+ messages in thread
From: Greg KH @ 2003-10-06 18:30 UTC (permalink / raw)
  To: Kevin P. Fleming
  Cc: Christian Borntraeger, Al Viro, Patrick Mochel, LKML,
	Dipankar Sarma

On Mon, Oct 06, 2003 at 11:23:53AM -0700, Kevin P. Fleming wrote:
> Greg KH wrote:
> 
> >The hotplug event points to the sysfs location of the kobject, that's
> >all.  libsysfs then takes that kobject location and sucks up all of the
> >attribute information for that kobject, which udev then uses to
> >determine what it should do.
> 
> This sounds like a very different issue than what I thought you said 
> originally. Your other message said a "find over the sysfs tree", 
> implying some sort of tree-wide search for relevant information. In 
> fact, the "find" is only for attributes in the directory owned by the 
> kobject, right? Once they have been "found", they will age out of the 
> dentry/inode cache just like any other search results.

They might, depending on the patch implementation.  And no, the issue
isn't different: to compare apples to apples, we have to show the memory
usage after all kobjects have been accessed in sysfs from userspace, not
just before, as some of the measurements do.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:09         ` Greg KH
@ 2003-10-06 18:31           ` Dipankar Sarma
  2003-10-06 18:34             ` Greg KH
  0 siblings, 1 reply; 34+ messages in thread
From: Dipankar Sarma @ 2003-10-06 18:31 UTC (permalink / raw)
  To: Greg KH; +Cc: Maneesh Soni, Al Viro, Patrick Mochel, LKML

On Mon, Oct 06, 2003 at 11:09:07AM -0700, Greg KH wrote:
> On Mon, Oct 06, 2003 at 11:31:19PM +0530, Dipankar Sarma wrote:
> No.  My main point is that for every hotplug event (which is caused by a
> kobject being created or destroyed), udev will run and look at the sysfs
> entry for the kobject (by using libsysfs which reads in all of the
> kobject information, including attributes).  This is a normal event, so
> we have to care about what happens after running 'find' on the sysfs
> tree as that is basically what will always happen.
> 
> Does that make more sense?  We can't just look at what happens with this
> patch without actually accessing all of the sysfs tree, as that will be
> the "normal" case.

That sounds odd. So, udev essentially results in frequent and continuous
"find /sys" runs? That doesn't sound good. You are unnecessarily adding
pressure on the VFS (the dcache especially). We will discuss this offline then
and see what needs to be done.

> > > Can you show this happening?
> > 
> > It should be easy to demonstrate. That is how dentries/inodes
> > work for on-disk filesystems. If Maneesh's patch didn't work that
> > way, then the whole point is lost. I hope that is not the case.
> 
> Me too.  It's just that the free memory numbers didn't show much gain
> with this patch on his system.  That worries me.

Well, Maneesh didn't post numbers after letting the system age out
sysfs dentries/inodes. Maneesh, can you post such numbers?


> > > But again, I don't think the added overhead you have added to a kobject
> > > is acceptable for not much gain for the normal case (systems without a
> > > zillion devices.)
> > 
> > IIRC, Maneesh test machine is a 2-way P4 xeon with six scsi disks and savings
> > are of about 800KB. That is as normal a case as it gets, I think.
> > It only gets better as you have more devices in your system.
> 
> 800Kb after running find?  I don't see that :)

No, those numbers were for just mounting sysfs. More numbers tomorrow.

Thanks
Dipankar

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:31           ` Dipankar Sarma
@ 2003-10-06 18:34             ` Greg KH
  2003-10-07  9:08               ` Andreas Jellinghaus
  0 siblings, 1 reply; 34+ messages in thread
From: Greg KH @ 2003-10-06 18:34 UTC (permalink / raw)
  To: Dipankar Sarma; +Cc: Maneesh Soni, Al Viro, Patrick Mochel, LKML

On Tue, Oct 07, 2003 at 12:01:32AM +0530, Dipankar Sarma wrote:
> On Mon, Oct 06, 2003 at 11:09:07AM -0700, Greg KH wrote:
> > On Mon, Oct 06, 2003 at 11:31:19PM +0530, Dipankar Sarma wrote:
> > No.  My main point is that for every hotplug event (which is caused by a
> > kobject being created or destroyed), udev will run and look at the sysfs
> > entry for the kobject (by using libsysfs which reads in all of the
> > kobject information, including attributes).  This is a normal event, so
> > we have to care about what happens after running 'find' on the sysfs
> > tree as that is basically what will always happen.
> > 
> > Does that make more sense?  We can't just look at what happens with this
> > patch without actually accessing all of the sysfs tree, as that will be
> > the "normal" case.
> 
> That sounds odd. So, udev essentially results in frequent and continuous
> "find /sys" runs? That doesn't sound good. You are unnecessarily adding
> pressure on the VFS (the dcache especially). We will discuss this offline then
> and see what needs to be done.

No, not a 'find', we look up the kobject that was added, and its
attributes.  Doing a 'find' will emulate this for your tests, that's
all.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:30         ` Greg KH
@ 2003-10-06 18:38           ` Kevin P. Fleming
  2003-10-07  8:30           ` Maneesh Soni
  1 sibling, 0 replies; 34+ messages in thread
From: Kevin P. Fleming @ 2003-10-06 18:38 UTC (permalink / raw)
  To: Greg KH
  Cc: Christian Borntraeger, Al Viro, Patrick Mochel, LKML,
	Dipankar Sarma

Greg KH wrote:

> They might, depending on the patch implementation.  And no, the issue
> isn't different, as we have to show the memory usage after all kobjects
> are accessed in sysfs from userspace, not just before, like some of the
> measurements are, in order to try to compare apples to apples.
> 

My point in saying that they are different was that your original 
message implied each hotplug event would be walking most (or all) of 
the sysfs tree _each time_, thus effectively touching all the dentries 
and inodes in the cache. In actuality during system startup it will 
appear that this is the case as all the hotplug events occur, but once 
that flurry is over the caches can release the unused entries. Later 
hotplug events would only bring in the entries relevant to the 
specific kobject that the event relates to, so would cause minimal 
cache pressure.

Now that I understand how you're expecting hotplug/udev to interact 
I'll bow out of this thread... I can't even begin to understand the 
complexities of the patch that's been posted :-)


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06  8:59 Maneesh Soni
  2003-10-06 16:08 ` Greg KH
@ 2003-10-06 18:44 ` Patrick Mochel
  2003-10-06 19:27   ` Dipankar Sarma
  1 sibling, 1 reply; 34+ messages in thread
From: Patrick Mochel @ 2003-10-06 18:44 UTC (permalink / raw)
  To: Maneesh Soni; +Cc: Al Viro, Greg KH, LKML, Dipankar Sarma


> The following patch set(mailed separately) provides a prototype for a backing 
> store mechanism for sysfs. Currently sysfs pins all its dentries and inodes in 
> memory there by wasting kernel lowmem even when it is not mounted. 
> 
> With this patch set we create sysfs dentry whenever it is required like 
> other real filesystems and, age and free it as per the dcache rules. We
> now save significant amount of Lowmem by avoiding un-necessary pinning. 
> The following numbers were on a 2-way system with 6 disks and 2 NICs with 
> about 1028 dentries. The numbers are just indicative as they are system
> wide collected from /proc and are not exclusively for sysfs.

No thanks. 

First off, I'm not philosophically opposed to the concept of reducing 
sysfs and kobject memory usage. I think it can be gracefully done, though 
I don't think this is quite the solution, and I don't have one myself.. 

Now, you would really only see problems when you have a large number of
devices and a limited amount of lowmem, i.e. it's only a problem on
large systems with 32-bit processors. And the traditional arguments
against this situation are to a) use and promote 64-bit platforms and b)
point out that if you have that many devices, you (or your customers) can
surely afford enough memory to make the sysfs footprint a non-issue.

Concerning the patch, I really don't like it. I look at the kobject and 
sysfs code with the assumption in my mind that the objects are already too 
large and the code more complex than it should be. Adding to this is not 
the right approach, just as a general rule of thumb. 

Also, I don't think that increasing the co-dependency between the kobject
and sysfs hierarchies is the right thing to do. They each have one pointer
back to the corresponding location in the other tree, which is about as
lightweight as you can get. Adding more only increases bloat for kobjects 
that are not represented in sysfs, and increases the total overhead of the 
entire system. 

As I said before, I don't know the right solution, but the directions to 
look in are related to attribute groups. Attributes definitely consume the
most memory (as opposed to the kobject hierarchy), so delaying 
their creation would help, hopefully without making the interface too 
awkward. 

You can also use the assumption that an attribute group exists for all the 
kobjects in a kset, and that a kobject knows what kset it belongs to. And
that eventually, all attributes should be added as part of an attribute 
group..


	Pat




^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
       [not found]         ` <DI7S.58w.13@gated-at.bofh.it>
@ 2003-10-06 19:01           ` Pascal Schmidt
  2003-10-06 19:10             ` Greg KH
  0 siblings, 1 reply; 34+ messages in thread
From: Pascal Schmidt @ 2003-10-06 19:01 UTC (permalink / raw)
  To: Greg KH; +Cc: linux-kernel

On Mon, 06 Oct 2003 20:20:16 +0200, you wrote in linux.kernel:

> Does that make more sense?  We can't just look at what happens with this
> patch without actually accessing all of the sysfs tree, as that will be
> the "normal" case.

Well, the normal case for me and other people not using any hot-pluggable
devices will be to run a hotplug agent that does absolutely nothing... so
in my case, the proposed patch would help - more memory available for the
normal work I do.

With a static /dev and no hotpluggable stuff around, there is no need
for a hotplug agent at all. And I do think such systems
are not too uncommon, so considering them would probably be nice.

-- 
Ciao,
Pascal

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 19:01           ` Pascal Schmidt
@ 2003-10-06 19:10             ` Greg KH
  2003-10-07  0:15               ` Pascal Schmidt
  0 siblings, 1 reply; 34+ messages in thread
From: Greg KH @ 2003-10-06 19:10 UTC (permalink / raw)
  To: Pascal Schmidt; +Cc: linux-kernel

On Mon, Oct 06, 2003 at 09:01:40PM +0200, Pascal Schmidt wrote:
> On Mon, 06 Oct 2003 20:20:16 +0200, you wrote in linux.kernel:
> 
> > Does that make more sense?  We can't just look at what happens with this
> > patch without actually accessing all of the sysfs tree, as that will be
> > the "normal" case.
> 
> Well, the normal case for me and other people not using any hot-pluggable
> devices will be to run a hotplug agent that does absolutely nothing... so
> in my case, the proposed patch would help - more memory available for the
> normal work I do.
> 
> With a static /dev and no hotpluggable stuff around, there is no need
> for and hotplug agent being there at all. And I do think such system
> are not too uncommon, so considering them would probably be nice.

Systems like this are not uncommon, I agree.  But also for systems like
this, the current code works just fine (small number of fixed devices.)
I haven't heard anyone complain about memory usage for a normal system
(99.9% of the systems out there.)

Also,  remember that in 2.7 I'm going to make device numbers random so
you will have to use something like udev to control your /dev tree.
Slowly weaning yourself off of a static /dev during the next 2 years or
so might be a good idea :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:44 ` Patrick Mochel
@ 2003-10-06 19:27   ` Dipankar Sarma
  2003-10-06 19:30     ` viro
  2003-10-06 19:33     ` Patrick Mochel
  0 siblings, 2 replies; 34+ messages in thread
From: Dipankar Sarma @ 2003-10-06 19:27 UTC (permalink / raw)
  To: Patrick Mochel; +Cc: Maneesh Soni, Al Viro, Greg KH, LKML

On Mon, Oct 06, 2003 at 11:44:14AM -0700, Patrick Mochel wrote:
> First off, I'm not philosophically opposed to the concept of reducing 
> sysfs and kobject memory usage. I think it can be gracefully done, though 
> I don't think this is quite the solution, and I don't have one myself.. 

Let's look at it this way - unless you find a way to save sizeof(struct dentry)
+ sizeof(struct inode) in every kobject/attr etc., there is no
way you can beat ageing of dentries/inodes.

> Now, you would really only see problems when you have a large number of
> devices and a limited amount of a Lowmem. I.e. it's only a problem on
> large systems with 32-bit processors. And, the traditional arguments
> against this situation is to a) use and promote 64-bit platforms and b)
> that if you have that many devices, you (or your customers) can surely
> afford enough memory to make the sysfs footprint a non-issue.

That is not a very realistic argument; ia32 customers will likely
run systems with a large number of disks and will have lowmem problems.
Besides that, think about the added complexity of lookups due to
all those pinned dentries forever residing in the dentry hash table.

> 
> Concerning the patch, I really don't like it. I look at the kobject and 
> sysfs code with the assumption in my mind that the objects are already too 
> large and the code more complex than it should be. Adding to this is not 
> the right approach, just as a general rule of thumb. 
> 
> Also, I don't think that increasing the co-dependency between the kobject
> and sysfs hierarchies is the right thing to do. They each have one pointer
> back to the corresponding location in the other tree, which is about as
> lightweight as you can get. Adding more only increases bloat for kobjects 
> that are not represented in sysfs, and increases the total overhead of the 
> entire system. 

I don't see how you can claim that the total overhead of the entire
system is high. See Christian's numbers. The point here is pretty
straightforward -

sysfs currently uses dentries to represent filesystem hierarchy.
We want to create the dentries on the fly and age them out.
So, we can no longer use dentries to represent filesystem hierarchy.
Now, *something* has to represent the actual filesystem
hierarchy, so that dentries/inodes can be created on a lookup
miss based on that. So, what do you do here? The kobject and
its associates already represent most of the information necessary
for a backing store. Maneesh just added a little more to complete
what is the equivalent of an on-disk filesystem. This allows the VFS to
create dentries and inodes on the fly and age them out later. Granted,
there is probably ugliness in the design to be sorted out
and it may have taken kobjects in a slightly different direction
than earlier, but it is not that odd when you look at it
from the VFS point of view.

> 
> You can also use the assumption that an attribute group exists for all the 
> kobjects in a kset, and that a kobject knows what kset it belongs to. And
> that eventually, all attributes should be added as part of an attribute 
> group..

As I said before, no matter how much you save on kobjects and attrs,
I can't see how you can account for ageing of dentries and inodes.
Please look at it from the VFS angle and see if there is a better
way to represent kobjects/attrs in order to create dentries/inodes
on demand and age later.

Thanks
Dipankar

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 19:27   ` Dipankar Sarma
@ 2003-10-06 19:30     ` viro
  2003-10-06 20:01       ` Dipankar Sarma
  2003-10-07  4:47       ` Maneesh Soni
  2003-10-06 19:33     ` Patrick Mochel
  1 sibling, 2 replies; 34+ messages in thread
From: viro @ 2003-10-06 19:30 UTC (permalink / raw)
  To: Dipankar Sarma; +Cc: Patrick Mochel, Maneesh Soni, Greg KH, LKML

On Tue, Oct 07, 2003 at 12:57:13AM +0530, Dipankar Sarma wrote:
 
> Let's look at it this way - unless you find a way to save sizeof(struct dentry)
> + sizeof(struct inode) in every kobject/attr etc., there is no
> way you can beat ageing of dentries/inodes.
 
> sysfs currently uses dentries to represent filesystem hierarchy.
> We want to create the dentries on the fly and age them out.
> So, we can no longer use dentries to represent filesystem hierarchy.
> Now, *something* has to represent the actual filesystem
> hierarchy, so that dentries/inodes can be created on a lookup
> miss based on that. So, what do you do here ? kobject and
> its associates already represent most of the information necessary
> for a backing store. Maneesh just added a little more to complete
> what is equivalent of a on-disk filesystem. This allows vfs to
> create dentries and inodes on the fly and age later on. Granted
> that there are probably ugliness in the design to be sorted out
> and it may have taken kobjects in a slightly different direction
> than earlier, but it is not that odd when you look at it
> from the VFS point of view.

Rot.  First of all, *not* *all* *kobjects* *are* *in* *sysfs*.  And these
are pure loss in your case.

What's more important, for leaves of the sysfs tree your overhead is also
a loss - we don't need to pin dentry down for them even with current sysfs
design.   And that can be done with minimal code changes and no data changes
at all.  Your patch will have to be more attractive than that.  What's the
expected ratio of directories to non-directories in sysfs?

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 19:27   ` Dipankar Sarma
  2003-10-06 19:30     ` viro
@ 2003-10-06 19:33     ` Patrick Mochel
  2003-10-06 20:26       ` Dipankar Sarma
  1 sibling, 1 reply; 34+ messages in thread
From: Patrick Mochel @ 2003-10-06 19:33 UTC (permalink / raw)
  To: Dipankar Sarma; +Cc: Maneesh Soni, Al Viro, Greg KH, LKML


> That is not a very realistic argument, ia32 customers will likely
> run systems with large number of disks and will have lowmem problem.

It's not a realistic requirement for me to solve your customer problems. 
:) I've been involved in this argument before, and the arguments have been 
the same, pretty much along party lines of IBM vs. Everyone else. I'm not 
here to point fingers, but you must heed the fact that we've been here 
before. 

> Besides that think about the added complexity of lookups due to
> all those pinned dentries forever residing in dentry hash table.

Well, along with more memory and more devices, I would expect your 
customers to also be paying for the fastest processors. :) 

> I don't see how you can claim that the total overhead of the entire
> system is high. See Christian's numbers. The point here is pretty
> straightforward -

Uh, let's recap: 

" This ended up increasing the size of kobject from 52 bytes to 108 bytes
but saving one dentry and inode per kobject. " 

Under one usage model, you've saved memory; however, in the case where 
kobjects are used but not represented in sysfs, you've more than doubled 
the overhead, and you've increased the total overhead in the worst case - 
when all dentries are looked up and pinned. 

> sysfs currently uses dentries to represent filesystem hierarchy.
> We want to create the dentries on the fly and age them out.
> So, we can no longer use dentries to represent filesystem hierarchy.
> Now, *something* has to represent the actual filesystem
> hierarchy, so that dentries/inodes can be created on a lookup
> miss based on that. So, what do you do here ? kobject and
> its associates already represent most of the information necessary
> for a backing store. 

I understand what you're trying to do, and I say it's the wrong approach. 
You're overloading kobjects in a manner unintended, and in a way that is 
not welcome. I do not have an alternative solution, but my last email gave 
some hints of where to look. Don't get bitter because I disagree. 

> > You can also use the assumption that an attribute group exists for all the 
> > kobjects in a kset, and that a kobject knows what kset it belongs to. And
> > that eventually, all attributes should be added as part of an attribute 
> > group..
> 
> As I said before, no matter how much you save on kobjects and attrs,
> I can't see how you can account for ageing of dentries and inodes.
> Please look at it from the VFS angle and see if there is a better
> way to represent kobjects/attrs in order to create dentries/inodes
> on demand and age later.

That's what I told you, only reversed - try again. The patch posted is 
unacceptable, though I'm willing to look at alternatives. I don't have or 
see a problem with the current situation, so your arguments are going to 
have to be a bit stronger. 

Thanks,


	Pat



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 19:30     ` viro
@ 2003-10-06 20:01       ` Dipankar Sarma
  2003-10-06 20:34         ` viro
  2003-10-07  4:47       ` Maneesh Soni
  1 sibling, 1 reply; 34+ messages in thread
From: Dipankar Sarma @ 2003-10-06 20:01 UTC (permalink / raw)
  To: viro; +Cc: Patrick Mochel, Maneesh Soni, Greg KH, LKML

On Mon, Oct 06, 2003 at 08:30:50PM +0100, viro@parcelfarce.linux.theplanet.co.uk wrote:
> On Tue, Oct 07, 2003 at 12:57:13AM +0530, Dipankar Sarma wrote:
> > sysfs currently uses dentries to represent filesystem hierarchy.
> > We want to create the dentries on the fly and age them out.
> > So, we can no longer use dentries to represent filesystem hierarchy.
> > Now, *something* has to represent the actual filesystem
> > hierarchy, so that dentries/inodes can be created on a lookup
> > miss based on that. So, what do you do here ? kobject and
> > its associates already represent most of the information necessary
> > for a backing store. Maneesh just added a little more to complete
> > what is equivalent of a on-disk filesystem. This allows vfs to
> > create dentries and inodes on the fly and age later on. Granted
> > that there are probably ugliness in the design to be sorted out
> > and it may have taken kobjects in a slightly different direction
> > than earlier, but it is not that odd when you look at it
> > from the VFS point of view.
> 
> Rot.  First of all, *not* *all* *kobjects* *are* *in* *sysfs*.  And these
> are pure loss in your case.

Gregkh pointed this out as well, and that is why I said that Maneesh's
patch may have taken kobjects in a different direction than what
was intended earlier. I don't disagree with this, and it may
very well be that dentry ageing will have to be done differently.

> 
> What's more important, for leaves of the sysfs tree your overhead is also
> a loss - we don't need to pin dentry down for them even with current sysfs
> design.   And that can be done with minimal code changes and no data changes
> at all.  Your patch will have to be more attractive than that.  What's the
> expected ratio of directories to non-directories in sysfs?

ISTR, a large number of files in sysfs are attributes which are leaves.
So, keeping a kobject tree partially connected using dentries as backing 
store as opposed to having everything connected might just be enough.
It will be looked into.

Thanks
Dipankar

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 19:33     ` Patrick Mochel
@ 2003-10-06 20:26       ` Dipankar Sarma
  2003-10-06 20:29         ` Patrick Mochel
  0 siblings, 1 reply; 34+ messages in thread
From: Dipankar Sarma @ 2003-10-06 20:26 UTC (permalink / raw)
  To: Patrick Mochel; +Cc: Maneesh Soni, Al Viro, Greg KH, LKML

On Mon, Oct 06, 2003 at 12:33:19PM -0700, Patrick Mochel wrote:
> It's not a realistic requirement for me to solve your customer problems. 
> :) I've been involved in this argument before, and the arguments have been 
> the same, pretty much along party lines of IBM vs. Everyone else. I'm not 
> here to point fingers, but you must heed the fact that we've been here 
> before. 

Well, I didn't mention the c-word, Pat, you did :-) I would much
rather help figure out the best possible way to implement dentry/inode
ageing in sysfs.

> > Besides that think about the added complexity of lookups due to
> > all those pinned dentries forever residing in dentry hash table.
> 
> Well, along with more memory and more devices, I would expect your 
> customers to also be paying for the fastest processors. :) 

Again, more than customers, it is a question of DTRT.

> > sysfs currently uses dentries to represent filesystem hierarchy.
> > We want to create the dentries on the fly and age them out.
> > So, we can no longer use dentries to represent filesystem hierarchy.
> > Now, *something* has to represent the actual filesystem
> > hierarchy, so that dentries/inodes can be created on a lookup
> > miss based on that. So, what do you do here ? kobject and
> > its associates already represent most of the information necessary
> > for a backing store. 
> 
> I understand what you're trying to do, and I say it's the wrong approach. 
> You're overloading kobjects in a manner unintended, and in a way that is 
> not welcome. I do not have an alternative solution, but my last email gave 
> some hints of where to look. Don't get bitter because I disagree. 

The argument about overloading kobjects is much better. Gregkh has also
indicated that non-sysfs kobjects will increase. That definitely
puts things in a different perspective. Fair enough.

> > > You can also use the assumption that an attribute group exists for all the 
> > > kobjects in a kset, and that a kobject knows what kset it belongs to. And
> > > that eventually, all attributes should be added as part of an attribute 
> > > group..
> > 
> > As I said before, no matter how much you save on kobjects and attrs,
> > I can't see how you can account for ageing of dentries and inodes.
> > Please look at it from the VFS angle and see if there is a better
> > way to represent kobjects/attrs in order to create dentries/inodes
> > on demand and age later.
> 
> That's what I told you, only reversed - try again. The patch posted is 
> unacceptable, though I'm willing to look at alternatives. I don't have or 

Viro's suggestion of pinning the non-leaf dentries only seems like
a very good first alternative to try out.

> see a problem with the current situation, so your arguments are going to 
> have to be a bit stronger. 

By not pinning dentries, you save several hundred KB of lowmem
on a common low-end system with six disks, get a much reduced number of dentries
in the hash table, and see huge savings on large systems. I would hope that
is a good argument. Granted, you don't like Maneesh's patch as it is now,
but those things will change as more feedback comes in.

Thanks
Dipankar

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 20:26       ` Dipankar Sarma
@ 2003-10-06 20:29         ` Patrick Mochel
  2003-10-07  4:31           ` Maneesh Soni
  0 siblings, 1 reply; 34+ messages in thread
From: Patrick Mochel @ 2003-10-06 20:29 UTC (permalink / raw)
  To: Dipankar Sarma; +Cc: Maneesh Soni, Al Viro, Greg KH, LKML


> > That's what I told you, only reversed - try again. The patch posted in 
> > unacceptable, though I'm willing to look at alternatives. I don't have or 
> 
> Viro's suggestion of pinning the non-leaf dentries only seems like
> a very good first alternative to try out.

Uh, that's about the same thing I suggested, though probably not as 
concisely: 

"As I said before, I don't know the right solution, but the directions to 
look in are related to attribute groups. Attributes definitely consume the 
> most amount of memory (as opposed to the kobject hierarchy), so delaying 
their creation would help, hopefully without making the interface too 
awkward. 

You can also use the assumption that an attribute group exists for all the 
kobjects in a kset, and that a kobject knows what kset it belongs to. And
that eventually, all attributes should be added as part of an attribute 
group.."

Attributes are the leaf entries, and they don't need to always exist. But, 
you have easy access to them via the attribute groups of the ksets the 
kobjects belong to. 

> > see a problem with the current situation, so your arguments are going to 
> > have to be a bit stronger. 
> 
> By not pinning dentries, you save several hundreds of KBs of lowmem
> in a common case low-end system with six disks, much reduced number of dentries
> in the hash table and huge savings in large systems. I would hope that
> is a good argument. Granted you don't like Maneesh's patch as it is now,
> but those things will change as more feedbacks come in.

A low-end system has six disks? I don't think so. Maybe a low-end server,
but of the dozen or so computers that I own, not one has six disks. Call 
me a techno-wimp, but I think your perspective is still a bit skewed. 

I understand your argument, but I still fail to see evidence that it's
really a problem. Perhaps you could characterize it a bit more and
convince us that sysfs overhead is really taking up a significant
percentage of the total overhead while running a set of common
applications with lots of open files (which should also be putting 
pressure on the same caches..) 

Thanks,


	Pat


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 20:01       ` Dipankar Sarma
@ 2003-10-06 20:34         ` viro
  0 siblings, 0 replies; 34+ messages in thread
From: viro @ 2003-10-06 20:34 UTC (permalink / raw)
  To: Dipankar Sarma; +Cc: Patrick Mochel, Maneesh Soni, Greg KH, LKML

On Tue, Oct 07, 2003 at 01:31:10AM +0530, Dipankar Sarma wrote:
> > What's more important, for leaves of the sysfs tree your overhead is also
> > a loss - we don't need to pin dentry down for them even with current sysfs
> > design.   And that can be done with minimal code changes and no data changes
> > at all.  Your patch will have to be more attractive than that.  What's the
> > expected ratio of directories to non-directories in sysfs?
> 
> ISTR, a large number of files in sysfs are attributes which are leaves.
> So, keeping a kobject tree partially connected using dentries as backing 
> store as opposed to having everything connected might just be enough.
> It will be looked into.

Note that the main reason why sysfs uses the ramfs model is that it gets good
interaction with VFS locking for free - it just uses the ->i_sem of associated
inodes for tree protection and that gives us all we need.  Very nice,
but it means that we need these associated inodes.  And since operations
are done deep in the tree, we don't want to walk all the way from the root,
bringing them in-core.

However, having them for all nodes is overkill - if we keep them only
for non-leaves, we get all the benefits of the ramfs approach with less overhead.
Indeed, even if the argument of a sysfs operation is a leaf node (and I'm not sure
that we actually have such beasts), we can always take the parent node and
be done with that.

All we need is
	a) ->lookup() that would look for an attribute (all directories are
in cache, so if there's no attribute with such name and ->lookup() had been
called, we'd need to return negative anyway).
	b) sysfs code slightly modified in several places - mostly,
sysfs_get_dentry() callers.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 19:10             ` Greg KH
@ 2003-10-07  0:15               ` Pascal Schmidt
  0 siblings, 0 replies; 34+ messages in thread
From: Pascal Schmidt @ 2003-10-07  0:15 UTC (permalink / raw)
  To: Greg KH; +Cc: linux-kernel

On Mon, 6 Oct 2003, Greg KH wrote:

> Systems like this are not uncommon, I agree.  But also for systems like
> this, the current code works just fine (small number of fixed devices.)
> I haven't heard anyone complain about memory usage for a normal system
> (99.9% of the systems out there.)

I'd like my kernel to have as small a footprint as possible. Allocated
memory that is almost never used is waste. It may not be much, but 
"add little to little and you will have a big pile". Whatever, we're
not the big pile yet and I'm not concerned enough to cook up patches.

> Also,  remember that in 2.7 I'm going to make device numbers random so
> you will have to use something like udev to control your /dev tree.
> Slowly weaning yourself off of a static /dev during the next 2 years or
> so might be a good idea :)

I guess by then we'll have an excellent udev version with no known
bugs. ;) However, requiring more and more packages to be installed just
to boot a system is also not something I like much.

-- 
Ciao,
Pascal


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 20:29         ` Patrick Mochel
@ 2003-10-07  4:31           ` Maneesh Soni
  2003-10-07  5:25             ` Nick Piggin
  0 siblings, 1 reply; 34+ messages in thread
From: Maneesh Soni @ 2003-10-07  4:31 UTC (permalink / raw)
  To: Patrick Mochel; +Cc: Dipankar Sarma, Al Viro, Greg KH, LKML

On Mon, Oct 06, 2003 at 01:29:20PM -0700, Patrick Mochel wrote:
> 
> Uh, that's about the same thing I suggested, though probably not as 
> concisely: 
> 
> "As I said before, I don't know the right solution, but the directions to 
> look in are related to attribute groups. Attributes definitely consume the 
> most amount of memory (as opposed to the kobject hierachy), so delaying 
> their creation would help, hopefully without making the interface too 
> awkward. 

OK, attributes do consume the most memory in sysfs. On the system I
mentioned, leaf dentries are about 65% of the total.

> You can also use the assumption that an attribute group exists for all the 
> kobjects in a kset, and that a kobject knows what kset it belongs to. And

That's not correct... the kobject corresponding to /sys/block/hda/queue
does not know which kset it belongs to or what its attributes are. The
same goes for /sys/block/hda/queue/iosched.

> that eventually, all attributes should be added as part of an attribute 
> group.."
> 
> Attributes are the leaf entries, and they don't need to always exist. But, 
> you have easy access to them via the attribute groups of the ksets the 
> kobjects belong to. 
> 

Having a backing store just for leaf dentries should be fine. But there is
_no_ easy access to attributes. Even for that I see some data-structure
changes required as of now. The reasons are:
 - not all kobjects belong to a kset. For example, /sys/block/hda/queue
 - not all ksets have attribute groups

I don't see any generic rule for finding the attributes or attribute group
of a kobject. Such randomness forced me to add new fields to kobject. The
sysfs picture does not reflect the kset-kobject relationship. For example,
the kobject corresponding to /sys/devices/system does not belong to
devices_subsystem and is not on the devices_subsys->list. There was no way
except to build new hierarchy info into the kobject.

What is people's opinion on the way I have linked attributes and
attribute groups to the kobject? I could not link "struct attribute" and
"struct attribute_group" directly to the kobject, because these are
generally statically allocated, and the same attribute structure is often
assigned to multiple kobjects.

Thanks
Maneesh
-- 
Maneesh Soni
Linux Technology Center, 
IBM Software Lab, Bangalore, India
email: maneesh@in.ibm.com
Phone: 91-80-5044999 Fax: 91-80-5268553
T/L : 9243696

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 19:30     ` viro
  2003-10-06 20:01       ` Dipankar Sarma
@ 2003-10-07  4:47       ` Maneesh Soni
  1 sibling, 0 replies; 34+ messages in thread
From: Maneesh Soni @ 2003-10-07  4:47 UTC (permalink / raw)
  To: viro; +Cc: Dipankar Sarma, Patrick Mochel, Greg KH, LKML

On Mon, Oct 06, 2003 at 08:30:50PM +0100, viro@parcelfarce.linux.theplanet.co.uk wrote:
> What's more important, for leaves of the sysfs tree your overhead is also
> a loss - we don't need to pin dentry down for them even with current sysfs
> design.   And that can be done with minimal code changes and no data changes
> at all.  Your patch will have to be more attractive than that.  What's the
> expected ratio of directories to non-directories in sysfs?

The current sysfs/kobject design _requires_ that dentries for the leaves
be present at all times. There is simply no generic way to find the
attributes of a kobject; as of now, sysfs uses dentry->d_fsdata to reach
the attribute.

On my system, leaves are around 65% of the total.

Thanks
Maneesh

-- 
Maneesh Soni
Linux Technology Center, 
IBM Software Lab, Bangalore, India
email: maneesh@in.ibm.com
Phone: 91-80-5044999 Fax: 91-80-5268553
T/L : 9243696

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-07  4:31           ` Maneesh Soni
@ 2003-10-07  5:25             ` Nick Piggin
  2003-10-07  7:17               ` Maneesh Soni
  0 siblings, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2003-10-07  5:25 UTC (permalink / raw)
  To: maneesh; +Cc: Patrick Mochel, Dipankar Sarma, Al Viro, Greg KH, LKML



Maneesh Soni wrote:

>On Mon, Oct 06, 2003 at 01:29:20PM -0700, Patrick Mochel wrote:
>
>>Uh, that's about the same thing I suggested, though probably not as 
>>concisely: 
>>
>>"As I said before, I don't know the right solution, but the directions to 
>>look in are related to attribute groups. Attributes definitely consume the 
>>most amount of memory (as opposed to the kobject hierachy), so delaying 
>>their creation would help, hopefully without making the interface too 
>>awkward. 
>>
>
>Ok.. attributes do consume maximum in sysfs. In the system I mentioned
>leaf dentries are about 65% of the total.
>
>
>>You can also use the assumption that an attribute group exists for all the 
>>kobjects in a kset, and that a kobject knows what kset it belongs to. And
>>
>
>That's not correct... kobject corresponding to /sys/block/hda/queue 
>doesnot know which kset it belongs to and what are its attributes. Same
>for /sys/block/hda/queue/iosched.
>
>
>>that eventually, all attributes should be added as part of an attribute 
>>group.."
>>
>>Attributes are the leaf entries, and they don't need to always exist. But, 
>>you have easy access to them via the attribute groups of the ksets the 
>>kobjects belong to. 
>>
>>
>
>Having backing store just for leaf dentries should be fine. But there is 
>_no_ easy access for attributes. For this also I see some data change required 
>as of now. The reasons are 
> - not all kobjects belong to a kset. For example, /sys/block/hda/queue
> - not all ksets have attribute groups
>  
>

queue and iosched might not be good examples as they are somewhat broken
wrt the block device scheme. Possibly they will be put in their own kset,
with /sys/block/hda/queue symlinked to them.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-07  5:25             ` Nick Piggin
@ 2003-10-07  7:17               ` Maneesh Soni
  0 siblings, 0 replies; 34+ messages in thread
From: Maneesh Soni @ 2003-10-07  7:17 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Patrick Mochel, Dipankar Sarma, Al Viro, Greg KH, LKML

On Tue, Oct 07, 2003 at 03:25:58PM +1000, Nick Piggin wrote:
> 
> 
> >
> >Having backing store just for leaf dentries should be fine. But there is 
> >_no_ easy access for attributes. For this also I see some data change 
> >required as of now. The reasons are 
> >- not all kobjects belong to a kset. For example, /sys/block/hda/queue
> >- not all ksets have attribute groups
> > 
> >
> 
> queue and iosched might not be good examples as they are somewhat broken
> wrt the block device scheme. Possibly they will be put in their own kset,
> with /sys/block/hda/queue symlinked to them.
> 

Well, here is another broken case then...

kobjects corresponding to /sys/class/tty/* and /sys/class/net/* have the
same kset (i.e. class_obj) but totally different attributes. There is no
way to find the attributes given a kobject belonging to, say,
/sys/class/net/eth0, except through the hierarchy maintained in the
pinned sysfs dentries.


-- 
Maneesh Soni
Linux Technology Center, 
IBM Software Lab, Bangalore, India
email: maneesh@in.ibm.com
Phone: 91-80-5044999 Fax: 91-80-5268553
T/L : 9243696

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:30         ` Greg KH
  2003-10-06 18:38           ` Kevin P. Fleming
@ 2003-10-07  8:30           ` Maneesh Soni
  1 sibling, 0 replies; 34+ messages in thread
From: Maneesh Soni @ 2003-10-07  8:30 UTC (permalink / raw)
  To: Greg KH
  Cc: Kevin P. Fleming, Christian Borntraeger, Al Viro, Patrick Mochel,
	LKML, Dipankar Sarma

On Mon, Oct 06, 2003 at 06:42:29PM +0000, Greg KH wrote:
> On Mon, Oct 06, 2003 at 11:23:53AM -0700, Kevin P. Fleming wrote:
> > Greg KH wrote:
> > 
> > >The hotplug event points to the sysfs location of the kobject, that's
> > >all.  libsysfs then takes that kobject location and sucks up all of the
> > >attribute information for that kobject, which udev then uses to
> > >determine what it should do.
> > 
> > This sounds like a very different issue than what I thought you said 
> > originally. Your other message said a "find over the sysfs tree", 
> > implying some sort of tree-wide search for relevant information. In 
> > fact, the "find" is only for attributes in the directory owned by the 
> > kobject, right? Once they have been "found", they will age out of the 
> > dentry/inode cache just like any other search results.
> 
> They might, depending on the patch implementation.  And no, the issue
> isn't different, as we have to show the memory usage after all kobjects
> are accessed in sysfs from userspace, not just before, like some of the
> measurements are, in order to try to compare apples to apples.
> 

Well Greg, the aim of the patch is to save memory when the kobject is not
in use. I don't think it is a good idea to pay 600 bytes of RAM for the
same thing that is available today for just 100 bytes.

I am trying one more version which should put minimal or no load on the
kobject when it is not in sysfs.

-- 
Maneesh Soni
Linux Technology Center, 
IBM Software Lab, Bangalore, India
email: maneesh@in.ibm.com
Phone: 91-80-5044999 Fax: 91-80-5268553
T/L : 9243696

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC 0/6] Backing Store for sysfs
  2003-10-06 18:34             ` Greg KH
@ 2003-10-07  9:08               ` Andreas Jellinghaus
  0 siblings, 0 replies; 34+ messages in thread
From: Andreas Jellinghaus @ 2003-10-07  9:08 UTC (permalink / raw)
  To: linux-kernel

> No, not a 'find', we look up the kobject that was added, and its
> attributes.  Doing a 'find' will emulate this for your tests, that's
> all.

But coldplugging will more or less do a "find /sys" to get a list of
all existing devices and add those to /dev. So expect a "find /sys" 
to be run at least once in the early boot process. Coldplugging is
not yet implemented, but it's possible to simulate it with a bit
of shell scripting.

Andreas


^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2003-10-07  9:08 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-10-06 17:38 [RFC 0/6] Backing Store for sysfs Christian Borntraeger
2003-10-06 17:41 ` Greg KH
2003-10-06 18:00   ` Kevin P. Fleming
2003-10-06 18:11     ` Greg KH
2003-10-06 18:23       ` Kevin P. Fleming
2003-10-06 18:30         ` Greg KH
2003-10-06 18:38           ` Kevin P. Fleming
2003-10-07  8:30           ` Maneesh Soni
     [not found] <Dzxw.1wW.3@gated-at.bofh.it>
     [not found] ` <DGfG.4UY.3@gated-at.bofh.it>
     [not found]   ` <DHv1.5Ir.1@gated-at.bofh.it>
     [not found]     ` <DHEU.7ET.19@gated-at.bofh.it>
     [not found]       ` <DHY6.3c0.7@gated-at.bofh.it>
     [not found]         ` <DI7S.58w.13@gated-at.bofh.it>
2003-10-06 19:01           ` Pascal Schmidt
2003-10-06 19:10             ` Greg KH
2003-10-07  0:15               ` Pascal Schmidt
  -- strict thread matches above, loose matches on Subject: below --
2003-10-06 18:19 Christian Borntraeger
2003-10-06 12:34 Christian Borntraeger
2003-10-06  8:59 Maneesh Soni
2003-10-06 16:08 ` Greg KH
2003-10-06 17:31   ` Dipankar Sarma
2003-10-06 17:38     ` Greg KH
2003-10-06 18:01       ` Dipankar Sarma
2003-10-06 18:09         ` Greg KH
2003-10-06 18:31           ` Dipankar Sarma
2003-10-06 18:34             ` Greg KH
2003-10-07  9:08               ` Andreas Jellinghaus
2003-10-06 18:44 ` Patrick Mochel
2003-10-06 19:27   ` Dipankar Sarma
2003-10-06 19:30     ` viro
2003-10-06 20:01       ` Dipankar Sarma
2003-10-06 20:34         ` viro
2003-10-07  4:47       ` Maneesh Soni
2003-10-06 19:33     ` Patrick Mochel
2003-10-06 20:26       ` Dipankar Sarma
2003-10-06 20:29         ` Patrick Mochel
2003-10-07  4:31           ` Maneesh Soni
2003-10-07  5:25             ` Nick Piggin
2003-10-07  7:17               ` Maneesh Soni

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox