public inbox for linux-kernel@vger.kernel.org
* Memory leak in 2.4.33-pre1?
@ 2006-02-13 21:46 Yoss
  2006-02-14  0:05 ` Willy Tarreau
  0 siblings, 1 reply; 9+ messages in thread
From: Yoss @ 2006-02-13 21:46 UTC (permalink / raw)
  To: linux-kernel

Hello.
My server has lost about 300MB of RAM. Details:

webcache:~# uname -a
Linux webcache 2.4.33-pre1 #2 Wed Feb 8 18:26:44 CET 2006 i686 GNU/Linux

No other patches, only the basic elements needed to work in a network
environment.
	
webcache:~# dmesg | grep MEM
127MB HIGHMEM available.
896MB LOWMEM available.

When I add up the memory used by processes (the SIZE column from ps or RES
from top) there is about 650MB in use. (Only squid, named and basic system
daemons are running on that host.) But:

webcache:~# free -m
            total       used       free     shared  buffers    cached
Mem:         1009        996         13          0       18        32
-/+ buffers/cache:       945         64
Swap:        1953          0       1952


So about 300MB is missing.
If anyone wants more detailed info, feel free to ask.

-- 
Bartłomiej Butyn aka Yoss
Nie ma tego złego co by na gorsze nie wyszło.


* Re: Memory leak in 2.4.33-pre1?
  2006-02-13 21:46 Memory leak in 2.4.33-pre1? Yoss
@ 2006-02-14  0:05 ` Willy Tarreau
  2006-02-14  0:34   ` Roberto Nibali
  2006-02-14  8:21   ` Yoss
  0 siblings, 2 replies; 9+ messages in thread
From: Willy Tarreau @ 2006-02-14  0:05 UTC (permalink / raw)
  To: Yoss; +Cc: linux-kernel

Hello,

On Mon, Feb 13, 2006 at 10:46:51PM +0100, Yoss wrote:
> Hello.
> My server has lost about 300MB of RAM. Details:
> 
> webcache:~# uname -a
> Linux webcache 2.4.33-pre1 #2 Wed Feb 8 18:26:44 CET 2006 i686 GNU/Linux
> 
> No other patches, only the basic elements needed to work in a network
> environment.
> 	
> webcache:~# dmesg | grep MEM
> 127MB HIGHMEM available.
> 896MB LOWMEM available.
> 
> When I add up the memory used by processes (the SIZE column from ps or RES
> from top) there is about 650MB in use. (Only squid, named and basic system
> daemons are running on that host.) But:
> 
> webcache:~# free -m
>             total       used       free     shared  buffers    cached
> Mem:         1009        996         13          0       18        32
> -/+ buffers/cache:       945         64
> Swap:        1953          0       1952
> 
> 
> So about 300MB is missing.
> If anyone wants more detailed info, feel free to ask.

You don't have to worry. Simply check /proc/slabinfo, you'll see plenty
of memory used by dentry_cache and inode_cache and that's expected. This
memory will be reclaimed when needed (for instance by calls to malloc()).
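
If you want to put a figure on it, you can roughly total what each cache
holds by multiplying the object count by the object size reported in
/proc/slabinfo. A quick, untested sketch of that (it assumes the plain 2.4
layout, i.e. "name active-objs total-objs obj-size active-slabs total-slabs
pages-per-slab" on each line):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/slabinfo", "r");
    char line[256], name[64];
    unsigned long act, tot, size, aslab, nslab, pps, sum = 0;

    if (!f) { perror("/proc/slabinfo"); return 1; }
    while (fgets(line, sizeof(line), f)) {
        /* the "slabinfo - version" header doesn't have 7 fields, so it is skipped */
        if (sscanf(line, "%63s %lu %lu %lu %lu %lu %lu",
                   name, &act, &tot, &size, &aslab, &nslab, &pps) == 7) {
            unsigned long kb = tot * size / 1024;  /* rough footprint of this cache */
            if (kb >= 1024)                        /* only print caches above 1 MB */
                printf("%-20s %8lu kB\n", name, kb);
            sum += kb;
        }
    }
    fclose(f);
    printf("%-20s %8lu kB\n", "total (approx)", sum);
    return 0;
}

On your box I would expect the inode_cache and dentry_cache lines alone to
account for most of the missing 300MB.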

If you don't believe me, simply allocate 1 GB in a process, then free it.

Regards,
Willy



* Re: Memory leak in 2.4.33-pre1?
  2006-02-14  0:05 ` Willy Tarreau
@ 2006-02-14  0:34   ` Roberto Nibali
  2006-02-14  4:50     ` Willy Tarreau
  2006-02-14  8:22     ` Yoss
  2006-02-14  8:21   ` Yoss
  1 sibling, 2 replies; 9+ messages in thread
From: Roberto Nibali @ 2006-02-14  0:34 UTC (permalink / raw)
  To: Willy Tarreau; +Cc: Yoss, linux-kernel

>>So about 300MB is missing.
>>If anyone wants more detailed info, feel free to ask.
>  
> You don't have to worry. Simply check /proc/slabinfo, you'll see plenty
> of memory used by dentry_cache and inode_cache and that's expected. This

Well, 300MB of dentry and inode cache is quite a lot for a system that has
been up for at most 6 days.

> memory will be reclaimed when needed (for instance by calls to malloc()).

slabtop -s c -o | head -20

would be interesting to see, otherwise I agree with Willy, as always ;).

Cheers,
Roberto Nibali, ratz
-- 
echo 
'[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc


* Re: Memory leak in 2.4.33-pre1?
  2006-02-14  0:34   ` Roberto Nibali
@ 2006-02-14  4:50     ` Willy Tarreau
  2006-02-14  8:22     ` Yoss
  1 sibling, 0 replies; 9+ messages in thread
From: Willy Tarreau @ 2006-02-14  4:50 UTC (permalink / raw)
  To: Roberto Nibali; +Cc: Yoss, linux-kernel

On Tue, Feb 14, 2006 at 01:34:31AM +0100, Roberto Nibali wrote:
> >>So about 300MB is missing.
> >>If anyone wants more detailed info, feel free to ask.
> > 
> >You don't have to worry. Simply check /proc/slabinfo, you'll see plenty
> >of memory used by dentry_cache and inode_cache and that's expected. This
> 
> Well, 300MB of dentry and inode cache is quite a lot for a system that has
> been up for at most 6 days.

My notebook eats more than double that when it runs slocate 3 minutes after
boot, so it depends on what the machine does ;-)

> >memory will be reclaimed when needed (for instance by calls to malloc()).
> 
> slabtop -s c -o | head -20
> 
> would be interesting to see, otherwise I agree with Willy, as always ;).

:-)

> Cheers,
> Roberto Nibali, ratz

cheers,
Willy



* Re: Memory leak in 2.4.33-pre1?
  2006-02-14  0:05 ` Willy Tarreau
  2006-02-14  0:34   ` Roberto Nibali
@ 2006-02-14  8:21   ` Yoss
  2006-02-14 21:43     ` Willy TARREAU
  1 sibling, 1 reply; 9+ messages in thread
From: Yoss @ 2006-02-14  8:21 UTC (permalink / raw)
  To: Willy Tarreau; +Cc: linux-kernel

On Tue, Feb 14, 2006 at 01:05:30AM +0100, Willy Tarreau wrote:
> You don't have to worry. Simply check /proc/slabinfo, you'll see plenty
> of memory used by dentry_cache and inode_cache and that's expected. This
> memory will be reclaimed when needed (for instance by calls to malloc()).

I downgraded the kernel to 2.4.33 last night. So there is no slabinfo from
that problem now. But thank you for the reply. Why is this memory not
shown anywhere in top or free?

> If you don't believe me, simply allocate 1 GB in a process, then free it.

If what you said is right, the day after tomorrow I'll have the same
situation - the only thing I changed is the kernel. So we'll see. :)

-- 
Bartłomiej Butyn aka Yoss
Nie ma tego złego co by na gorsze nie wyszło.


* Re: Memory leak in 2.4.33-pre1?
  2006-02-14  0:34   ` Roberto Nibali
  2006-02-14  4:50     ` Willy Tarreau
@ 2006-02-14  8:22     ` Yoss
  1 sibling, 0 replies; 9+ messages in thread
From: Yoss @ 2006-02-14  8:22 UTC (permalink / raw)
  To: Roberto Nibali; +Cc: linux-kernel

On Tue, Feb 14, 2006 at 01:34:31AM +0100, Roberto Nibali wrote:
> >You don't have to worry. Simply check /proc/slabinfo, you'll see plenty
> >of memory used by dentry_cache and inode_cache and that's expected. This
> 
> Well, 300MB of dentry and inode cache is quite a lot for a system that has
> been up for at most 6 days.

It had 3 days of uptime. But as I wrote in the previous mail, I rebooted it
last night to downgrade the kernel.

-- 
Bartłomiej Butyn aka Yoss
Nie ma tego złego co by na gorsze nie wyszło.


* Re: Memory leak in 2.4.33-pre1?
  2006-02-14  8:21   ` Yoss
@ 2006-02-14 21:43     ` Willy TARREAU
  2006-02-15  9:01       ` Yoss
  0 siblings, 1 reply; 9+ messages in thread
From: Willy TARREAU @ 2006-02-14 21:43 UTC (permalink / raw)
  To: Yoss; +Cc: linux-kernel, Roberto Nibali

On Tue, Feb 14, 2006 at 09:21:36AM +0100, Yoss wrote:
> On Tue, Feb 14, 2006 at 01:05:30AM +0100, Willy Tarreau wrote:
> > You don't have to worry. Simply check /proc/slabinfo, you'll see plenty
> > of memory used by dentry_cache and inode_cache and that's expected. This
> > memory will be reclaimed when needed (for instance by calls to malloc()).
> 
> I downgraded the kernel to 2.4.33 last night.

I presume you mean 2.4.32 here.

> So there is no slabinfo from that problem now. But thank you for the reply.
> Why is this memory not shown anywhere in top or free?

I don't know, it's some gray area for me too, it's just that I'm used to
this behaviour. I even have a program that I run to free it when I want
to prioritize disk cache usage over dentry cache (appended to this mail).

For instance, I've just booted my notebook to work on it. Three minutes after
the boot completes, slocate starts indexing files (I delay it 3 minutes to
speed up service startup). Now that slocate has finished its job, here is what
a few tools have to say about my memory usage:

$ free        
             total       used       free     shared    buffers     cached
Mem:        514532     509576       4956          0      25836       5808
-/+ buffers/cache:     477932      36600
Swap:       312316       1436     310880

It even managed to swap !

$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0   1436   4908  25860   5824    0    1  1213    39  536   586  2  7 90  0
 0  0   1436   4904  25860   5824    0    0     0     0  254    16  0  0 100  0

$ top
top - 22:32:47 up 12 min,  1 user,  load average: 0.12, 0.52, 0.34
Tasks:  44 total,   1 running,  43 sleeping,   0 stopped,   0 zombie
Cpu(s):   0.1% user,   7.2% system,   2.2% nice,  90.5% idle
Mem:    514532k total,   510076k used,     4456k free,    25900k buffers
Swap:   312316k total,     1436k used,   310880k free,     6112k cached

$ cat /proc/meminfo
        total:    used:    free:  shared: buffers:  cached:
Mem:  526880768 522194944  4685824        0 26521600  6946816
Swap: 319811584  1470464 318341120
MemTotal:       514532 kB
MemFree:          4576 kB
MemShared:           0 kB
Buffers:         25900 kB
Cached:           6132 kB
SwapCached:        652 kB
Active:          20052 kB
Inactive:        12664 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       514532 kB
LowFree:          4576 kB
SwapTotal:      312316 kB
SwapFree:       310880 kB

$ cat /proc/slabinfo 
slabinfo - version: 1.1
kmem_cache            81    108    108    3    3    1
urb_priv               0      0     64    0    0    1
ip_fib_hash           10    113     32    1    1    1
clip_arp_cache         0      0    128    0    0    1
mpr                    0      0     16    0    0    1
label                  0      0     32    0    0    1
tcp_tw_bucket          0      0     96    0    0    1
tcp_bind_bucket        9    113     32    1    1    1
tcp_open_request       0      0     64    0    0    1
inet_peer_cache        0      0     64    0    0    1
ip_dst_cache           2     20    192    1    1    1
arp_cache              1     40     96    1    1    1
ip_arpf_cache          0      0    128    0    0    1
blkdev_requests     5120   5120     96  128  128    1
xfs_chashlist          0      0     20    0    0    1
xfs_ili                0      0    140    0    0    1
xfs_ifork              0      0     56    0    0    1
xfs_efi_item           0      0    260    0    0    1
xfs_efd_item           0      0    260    0    0    1
xfs_buf_item           0      0    148    0    0    1
xfs_dabuf              0      0     16    0    0    1
xfs_da_state           0      0    336    0    0    1
xfs_trans              0      0    592    0    0    2
xfs_inode              0      0    376    0    0    1
xfs_btree_cur          0      0    132    0    0    1
xfs_bmap_free_item     0      0     12    0    0    1
xfs_buf_t              0      0    192    0    0    1
linvfs_icache          0      0    320    0    0    1
ext2_xattr             0      0     44    0    0    1
journal_head           0      0     48    0    0    1
revoke_table           0      0     12    0    0    1
revoke_record          0      0     32    0    0    1
ext3_xattr             0      0     44    0    0    1
eventpoll pwq          0      0     36    0    0    1
eventpoll epi          0      0     96    0    0    1
dnotify_cache          0      0     20    0    0    1
file_lock_cache        3     42     92    1    1    1
fasync_cache           0      0     16    0    0    1
uid_cache              6    113     32    1    1    1
skbuff_head_cache    207    220    192   11   11    1
sock                  34     36    896    9    9    1
sigqueue               0     29    132    0    1    1
kiobuf                 0      0     64    0    0    1
cdev_cache            60    177     64    3    3    1
bdev_cache             8     59     64    1    1    1
mnt_cache             16     59     64    1    1    1
inode_cache       751083 751096    480 93886 93887    1
dentry_cache      602570 607530    128 20251 20251    1
filp                 334    360    128   12   12    1
names_cache            0      2   4096    0    2    1
buffer_head         7084   9440     96  234  236    1
mm_struct             32     48    160    2    2    1
vm_area_struct      2362   2600     96   62   65    1
fs_cache              31    113     32    1    1    1
files_cache           32     36    416    4    4    1
signal_act            40     42   1312   14   14    1
size-131072(DMA)       0      0 131072    0    0   32
size-131072            0      0 131072    0    0   32
size-65536(DMA)        0      0  65536    0    0   16
size-65536             0      0  65536    0    0   16
size-32768(DMA)        0      0  32768    0    0    8
size-32768             8      8  32768    8    8    8
size-16384(DMA)        0      0  16384    0    0    4
size-16384             0      0  16384    0    0    4
size-8192(DMA)         0      0   8192    0    0    2
size-8192              4      4   8192    4    4    2
size-4096(DMA)         0      0   4096    0    0    1
size-4096             69     69   4096   69   69    1
size-2048(DMA)         0      0   2048    0    0    1
size-2048            213    216   2048  107  108    1
size-1024(DMA)         0      0   1024    0    0    1
size-1024             51     52   1024   13   13    1
size-512(DMA)          0      0    512    0    0    1
size-512             183    184    512   23   23    1
size-256(DMA)          0      0    256    0    0    1
size-256              51     60    256    4    4    1
size-128(DMA)          3     30    128    1    1    1
size-128            1574   1620    128   54   54    1
size-64(DMA)           0      0     64    0    0    1
size-64            38450  38468     64  652  652    1
size-32(DMA)          51    113     32    1    1    1
size-32            76440  78309     32  693  693    1

Have you noticed dentry_cache and inode_cache above ?
Now I will allocate (and touch) 500000 kB of RAM (OK, it speaks French) :

$ ~/dev/freemem 500000
Liberation de 500000 ko de ram en cours
500000 k alloues (blocs de 1k) 
Memoire liberee.

$ free
             total       used       free     shared    buffers     cached
Mem:        514532      22368     492164          0       1324       4276
-/+ buffers/cache:      16768     497764
Swap:       312316       4252     308064

$ grep cache /proc/slabinfo 
kmem_cache            81    108    108    3    3    1
clip_arp_cache         0      0    128    0    0    1
inet_peer_cache        0      0     64    0    0    1
ip_dst_cache           2     20    192    1    1    1
arp_cache              1     40     96    1    1    1
ip_arpf_cache          0      0    128    0    0    1
linvfs_icache          0      0    320    0    0    1
dnotify_cache          0      0     20    0    0    1
file_lock_cache        3     42     92    1    1    1
fasync_cache           0      0     16    0    0    1
uid_cache              6    113     32    1    1    1
skbuff_head_cache    208    220    192   11   11    1
cdev_cache            17    177     64    3    3    1
bdev_cache             8     59     64    1    1    1
mnt_cache             16     59     64    1    1    1
inode_cache          321    768    480   96   96    1
dentry_cache         338   1740    128   58   58    1
names_cache            0      2   4096    0    2    1
fs_cache              31    113     32    1    1    1
files_cache           32     36    416    4    4    1

Have you noticed the difference ? So the memory is not wasted at all. It's
just reported as 'used'.
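
By the way, the "-/+ buffers/cache" line that free prints is simply computed
from /proc/meminfo (used minus Buffers and Cached, free plus Buffers and
Cached). A rough, untested sketch of that arithmetic, using the 2.4 field
names shown above; note that it still does not count the reclaimable slab,
which is why the box looked more loaded than it really was:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    unsigned long v, total = 0, memfree = 0, buffers = 0, cached = 0;

    if (!f) { perror("/proc/meminfo"); return 1; }
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "MemTotal: %lu kB", &v) == 1)      total   = v;
        else if (sscanf(line, "MemFree: %lu kB", &v) == 1)  memfree = v;
        else if (sscanf(line, "Buffers: %lu kB", &v) == 1)  buffers = v;
        else if (sscanf(line, "Cached: %lu kB", &v) == 1)   cached  = v;
    }
    fclose(f);
    /* same numbers as the "-/+ buffers/cache" line of free */
    printf("used - buffers/cache: %lu kB\n", total - memfree - buffers - cached);
    printf("free + buffers/cache: %lu kB\n", memfree + buffers + cached);
    return 0;
}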

> > If you don't believe me, simply allocate 1 GB in a process, then free it.
> 
> If that what you said is rigth, day after tomorow I'll have the same
> situation - only thing I have changed is kernel. So we'll see. :)

If you encounter it, simply run the tool below with a size in kB. Warning:
a wrong parameter combined with an improper ulimit will hang your system!
Ask it to allocate only what you *know* you can free (e.g. the SwapFree amount).

> -- 
> Bartłomiej Butyn aka Yoss
> Nie ma tego złego co by na gorsze nie wyszło.

Regards,
Willy

--- begin freemem.c ---
/* it's old and ugly but still useful */
#include <stdio.h>
#include <stdlib.h>   /* malloc(), atol(), exit() */

int main(int argc, char **argv) {

  unsigned long int i, k = 0, max;
  char *p;

  if (argc > 1)
      max = atol(argv[1]);
  else
      max = 0xffffffff;

  printf("Liberation de %lu ko de ram en cours\n", max);
  /* allocate 1 MB blocks and touch one byte per page so they are really backed */
  while (((p = (char *)malloc(1048576)) != NULL) && (k + 1024 <= max)) {
    for (i = 0; i < 256; p[4096 * i++] = 0);  /* touch the block */
    k += 1024;
    fprintf(stderr, "\r%lu k alloues (blocs de 1M)", k);
  }
  /* then fill the remainder with 64 kB blocks... */
  while (((p = (char *)malloc(65536)) != NULL) && (k + 64 <= max)) {
    for (i = 0; i < 16; p[4096 * i++] = 0);  /* touch the block */
    k += 64;
    fprintf(stderr, "\r%lu k alloues (blocs de 64k)", k);
  }
  /* ... and finally 1 kB blocks */
  while (((p = (char *)malloc(1024)) != NULL) && (k + 1 <= max)) {
    for (i = 0; i < 16; p[64 * i++] = 0);  /* touch the block */
    k += 1;
    fprintf(stderr, "\r%lu k alloues (blocs de 1k) ", k);
  }
  /* everything is released at once when the process exits */
  fprintf(stderr, "\nMemoire liberee.\n");
  exit(0);
}
--- end freemem.c ---



* Re: Memory leak in 2.4.33-pre1?
  2006-02-14 21:43     ` Willy TARREAU
@ 2006-02-15  9:01       ` Yoss
  2006-02-15 20:11         ` Willy Tarreau
  0 siblings, 1 reply; 9+ messages in thread
From: Yoss @ 2006-02-15  9:01 UTC (permalink / raw)
  To: Willy TARREAU; +Cc: linux-kernel

On Tue, Feb 14, 2006 at 10:43:49PM +0100, Willy TARREAU wrote:
> On Tue, Feb 14, 2006 at 09:21:36AM +0100, Yoss wrote:
> > I downgraded the kernel to 2.4.33 last night.
> I presume you mean 2.4.32 here.

Right. :)

> 
> > So there is no slabinfo from that problem now. But thank you for the reply.
> > Why is this memory not shown anywhere in top or free?
> 
> I don't know, it's some gray area for me too, it's just that I'm used to
> this behaviour. I even have a program that I run to free it when I want
> to prioritize disk cache usage over dentry cache (appended to this mail).

I think it is grey for me too ;\
After about 36h of uptime, the combined size of the processes is 714MB. free
says:

webcache:~# free -m
             total       used       free     shared    buffers     cached
Mem:          1009        996         13          0         50         93
-/+ buffers/cache:        852        157
Swap:         1953          0       1953

So there is a 139MB difference. But:

# slabtop -s c -o | head -20

 Active / Total Objects (% used)    : 793971 / 805664 (98.5%)
 Active / Total Slabs (% used)      : 66499 / 66513 (100.0%)
 Active / Total Caches (% used)     : 36 / 59 (61.0%)
 Active / Total Size (% used)       : 235510.62K / 236644.88K (99.5%)
 Minimum / Average / Maximum Object : 0.02K / 0.29K / 128.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
362495 362490  99%    0.50K  51785        8    207140K inode_cache
380910 380900  99%    0.12K  12697       32     50788K dentry_cache
 44800  35342  78%    0.09K   1120       42      4480K buffer_head
   636    607  95%    2.00K    318        2      1272K size-2048
   139    139 100%    4.00K    139        1       556K size-4096
  2064   1891  91%    0.16K     86       25       344K ip_dst_cache
   232    224  96%    0.91K     58        4       232K sock
  2080   2048  98%    0.09K     52       42       208K blkdev_requests
  5198   4495  86%    0.03K     46      128       184K size-32
  1032    658  63%    0.16K     43       25       172K skbuff_head_cache
   870    864  99%    0.12K     29       32       116K filp
  1416   1381  97%    0.06K     24       64        96K size-64
   570    545  95%    0.12K     19       32        76K size-128

> > Have you noticed the difference ? So the memory is not wasted at all. It's
> > just reported as 'used'.

I see. I also noticed that I simply cannot tell what this memory is used
for. Would it be better for me to enlarge cache_mem in squid by about 100MB
and have less *_cache, or is it better to have more *_cache? :)

> > > If you don't believe me, simply allocate 1 GB in a process, then free it.
> > If what you said is right, the day after tomorrow I'll have the same
> > situation - the only thing I changed is the kernel. So we'll see. :)
> 
> If you encounter it, simply run the tool below with a size in kB. Warning:
> a wrong parameter combined with an improper ulimit will hang your system!
> Ask it to allocate only what you *know* you can free (e.g. the SwapFree amount).

I don't mind whether this memory is used for cache or is free. I just want to
be sure that it is not leaking :)

-- 
Bartłomiej Butyn aka Yoss
Nie ma tego złego co by na gorsze nie wyszło.


* Re: Memory leak in 2.4.33-pre1?
  2006-02-15  9:01       ` Yoss
@ 2006-02-15 20:11         ` Willy Tarreau
  0 siblings, 0 replies; 9+ messages in thread
From: Willy Tarreau @ 2006-02-15 20:11 UTC (permalink / raw)
  To: Yoss; +Cc: linux-kernel

Hi !
On Wed, Feb 15, 2006 at 10:01:30AM +0100, Yoss wrote:
> > Have you noticed the difference ? So the memory is not wasted at all. It's
> > just reported as 'used'.
> 
> I see. I also noticed that I simply cannot tell what this memory is used
> for. Would it be better for me to enlarge cache_mem in squid by about 100MB
> and have less *_cache, or is it better to have more *_cache? :)

If squid is the main use of your server, then I guess it would be better to
reserve some memory for it by increasing its cache_mem, instead of seeing that
memory used to cache files that don't matter.

> > > > If you don't believe me, simply allocate 1 GB in a process, then free it.
> > > If what you said is right, the day after tomorrow I'll have the same
> > > situation - the only thing I changed is the kernel. So we'll see. :)
> > 
> > If you encounter it, simply run the tool below with a size in kB. Warning:
> > a wrong parameter combined with an improper ulimit will hang your system!
> > Ask it to allocate only what you *know* you can free (e.g. the SwapFree amount).
> 
> I don't mind whether this memory is used for cache or is free. I just want to
> be sure that it is not leaking :)

I don't remember when "standard" 2.4 was last seen leaking. By "standard",
I mean without fancy drivers and filesystems. For instance, look at my
outgoing dns+mail+http relay (pentium 133 + 96 MB RAM) :

ns# uptime
  9:05pm  up 466 days, 15:10,  1 user,  load average: 0.04, 0.01, 0.00
          ^^^^^^^^^^^
ns# free
             total       used       free     shared    buffers     cached
Mem:         94180      91820       2360          0      11080       5700
-/+ buffers/cache:      75040      19140
Swap:       524656      27108     497548
ns# uname -rv
2.4.18-wt4 #1 Mon Apr 1 13:57:42 CEST 2002
^^^^^^                                ^^^^
Last built in 2002, nearly 4 years ago. After 466 days of uptime, I suspect
I would have noticed if there were any significant leak ;-)

> Bartłomiej Butyn aka Yoss
> Nie ma tego złego co by na gorsze nie wyszło.

Cheers,
Willy



end of thread

Thread overview: 9 messages
2006-02-13 21:46 Memory leak in 2.4.33-pre1? Yoss
2006-02-14  0:05 ` Willy Tarreau
2006-02-14  0:34   ` Roberto Nibali
2006-02-14  4:50     ` Willy Tarreau
2006-02-14  8:22     ` Yoss
2006-02-14  8:21   ` Yoss
2006-02-14 21:43     ` Willy TARREAU
2006-02-15  9:01       ` Yoss
2006-02-15 20:11         ` Willy Tarreau
