* Gigantic memory leak in linux-2.6.[789]!
From: Kristian Sørensen @ 2004-10-22 14:13 UTC (permalink / raw)
To: linux-kernel; +Cc: umbrella
Hi all!
After some more testing following the previous post about the OOPS in
generic_delete_inode, we have now found a gigantic memory leak in Linux
2.6.[789]. The scenario is the same:
File system: EXT3
Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:
let "i = 0"
while [ "$i" -lt 10 ]; do
tar jxf linux-2.6.8.1.tar.bz2;
rm -fr linux-2.6.8.1;
let "i = i + 1"
done
When the loop has completed, the system uses 124 MB more memory _each_ time...
so it is pretty easy to mount a denial-of-service attack :-(
We have tried the same test on a RHEL WS 3 host (running a RedHat 2.4 kernel)
- and there is no problem.
Any ideas?
--
Kristian Sørensen
- The Umbrella Project
http://umbrella.sourceforge.net
E-mail: ipqw@users.sf.net, Phone: +45 29723816
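
Most of the confusion in the replies below comes from reading the "free"
number in isolation. A minimal userspace check that also counts Buffers and
Cached from /proc/meminfo makes the distinction visible. This is only an
illustrative sketch (field names as they appear on 2.6-era kernels), not part
of the original report:

    /* Print MemFree alongside the memory the kernel can reclaim on demand. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            FILE *f = fopen("/proc/meminfo", "r");
            char line[128], key[64];
            unsigned long memfree = 0, buffers = 0, cached = 0, val;

            if (!f) { perror("/proc/meminfo"); return 1; }
            while (fgets(line, sizeof line, f)) {
                    if (sscanf(line, "%63[^:]: %lu", key, &val) != 2)
                            continue;
                    if (!strcmp(key, "MemFree")) memfree = val;
                    if (!strcmp(key, "Buffers")) buffers = val;
                    if (!strcmp(key, "Cached"))  cached  = val;
            }
            fclose(f);
            printf("MemFree: %lu kB, reclaimable (Buffers+Cached): %lu kB\n",
                   memfree, buffers + cached);
            printf("effectively free: %lu kB\n", memfree + buffers + cached);
            return 0;
    }

Run before and after the tar loop: MemFree drops with each run, but the
"effectively free" total stays roughly constant.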

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Kasper Sandberg @ 2004-10-22 14:32 UTC (permalink / raw)
To: Kristian Sørensen; +Cc: LKML Mailinglist, umbrella

On Fri, 2004-10-22 at 16:13 +0200, Kristian Sørensen wrote:
> Hi all!
>
> After some more testing following the previous post about the OOPS in
> generic_delete_inode, we have now found a gigantic memory leak in Linux
> 2.6.[789]. The scenario is the same:
<SNIP>
> When the loop has completed, the system uses 124 MB more memory _each_
> time... so it is pretty easy to mount a denial-of-service attack :-(

Well... I could understand if it used the total size of an unpacked Linux
kernel even after the loop stopped, since it would just keep it cached;
however, that might not be the case when it adds 124 MB each time...

> We have tried the same test on a RHEL WS 3 host (running a RedHat 2.4
> kernel) - and there is no problem.
>
> Any ideas?

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Richard B. Johnson @ 2004-10-22 15:07 UTC (permalink / raw)
To: Kasper Sandberg; +Cc: Kristian Sørensen, LKML Mailinglist, umbrella

On Fri, 22 Oct 2004, Kasper Sandberg wrote:
> On Fri, 2004-10-22 at 16:13 +0200, Kristian Sørensen wrote:
>> Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:
<SNIP>
>> When the loop has completed, the system uses 124 MB more memory _each_
>> time... so it is pretty easy to mount a denial-of-service attack :-(

Do something like this with your favorite kernel version...

    while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ; vmstat ; done

You can watch this for as long as you want. If there is no other activity,
the values reported by vmstat remain, on the average, stable. If you throw
in a `sync` command, the values rapidly converge to little memory usage as
the disk data gets flushed to disk.

> Well... I could understand if it used the total size of an unpacked Linux
> kernel even after the loop stopped, since it would just keep it cached;
> however, that might not be the case when it adds 124 MB each time...

Cheers,
Dick Johnson
Penguin : Linux version 2.6.9 on an i686 machine (5537.79 GrumpyMips).
98.36% of all statistics are fiction.

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Kristian Sørensen @ 2004-10-22 15:50 UTC (permalink / raw)
To: root; +Cc: Kasper Sandberg, Kristian Sørensen, LKML Mailinglist, umbrella

Richard B. Johnson wrote:
> Do something like this with your favorite kernel version...
>
>     while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ; vmstat ; done
>
> You can watch this for as long as you want. If there is no other activity,
> the values reported by vmstat remain, on the average, stable. If you throw
> in a `sync` command, the values rapidly converge to little memory usage as
> the disk data gets flushed to disk.

The problem is that the free memory reported by vmstat decreases by 124 MB
for every 10 iterations...

The allocated memory does not get freed even if the system has been left
alone for three hours!

Cheers, Kristian.

--
Kristian Sørensen
 - The Umbrella Project
   http://umbrella.sourceforge.net

E-mail: ipqw@users.sf.net, Phone: +45 29723816

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Richard B. Johnson @ 2004-10-22 16:12 UTC (permalink / raw)
To: Kristian Sørensen; Cc: Kasper Sandberg, Kristian Sørensen, LKML Mailinglist, umbrella

On Fri, 22 Oct 2004, Kristian Sørensen wrote:
> Richard B. Johnson wrote:
<SNIP>
> The problem is that the free memory reported by vmstat decreases by 124 MB
> for every 10 iterations...
>
> The allocated memory does not get freed even if the system has been left
> alone for three hours!

Yes. So? Why would it be freed? It is left as it was until it is needed.
Freeing it would waste CPU cycles. This cannot be a problem unless you are
inventing some sort of hot-swap memory thing. If so, you need to make a
module that tells the kernel memory manager to free everything so you can
remove and replace the RAM.

Cheers,
Dick Johnson
Penguin : Linux version 2.6.9 on an i686 machine (5537.79 GrumpyMips).
98.36% of all statistics are fiction.

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Kristian Sørensen @ 2004-10-22 19:24 UTC (permalink / raw)
To: root; +Cc: andre, Kasper Sandberg, LKML Mailinglist, umbrella

Richard B. Johnson wrote:
<SNIP>
> Yes. So? Why would it be freed? It is left as it was until it is needed.
> Freeing it would waste CPU cycles.

Okay :-) So it looks like two of you are saying we have been mistaken :-D
(and that the behaviour has changed since linux-2.4).

Anyway - how does this work in practice? Does the file system
implementation use a wrapper around kfree, or?
Is there any way to force kernel memory to be released the instant it is
freed? Otherwise it is quite hard to test for possible memory leaks in our
Umbrella kernel module ... :-/

Best regards,

--
Kristian Sørensen
 - The Umbrella Project
   http://umbrella.sourceforge.net

E-mail: ipqw@users.sf.net, Phone: +45 29723816

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Richard B. Johnson @ 2004-10-22 19:20 UTC (permalink / raw)
To: Kristian Sørensen; +Cc: andre, Kasper Sandberg, LKML Mailinglist, umbrella

On Fri, 22 Oct 2004, Kristian Sørensen wrote:
<SNIP>
> Anyway - how does this work in practice? Does the file system
> implementation use a wrapper around kfree, or?
> Is there any way to force kernel memory to be released the instant it is
> freed? Otherwise it is quite hard to test for possible memory leaks in
> our Umbrella kernel module ... :-/

First, you can always execute sync() and flush most of the file buffers to
disk. This frees up a lot.

In the kernel... if you are doing a lot of kmalloc() allocation and
kfree(), you can write out the pointer values using printk(). I use '0'
before such output ...

    printk("0 %p\n", ptr);

... then do `dmesg | sort >xxx.xxx`. Now you can look at the file and see
the sorted pointer values. If they repeat, chances are pretty good that you
are not leaking memory.

In user space... periodically look at the break address. If it keeps going
up, you may have a leak. If it is stable, you are probably okay.

The only sure way of detecting a memory leak is to use some substitute code
(maybe a macro) that substitutes for (intercepts) the allocator and
deallocator. It eventually executes the real allocator and deallocator
after saving information somewhere you define (array, file, etc.). You can
sort that information and determine whether there are as many (k)mallocs as
there are (k)frees (for instance) of the same pointer values.

Cheers,
Dick Johnson
Penguin : Linux version 2.6.9 on an i686 machine (5537.79 GrumpyMips).
98.36% of all statistics are fiction.
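
The interception Richard describes can be sketched as a pair of debug macros
wrapped around the slab allocator. This is only an illustration: the
dbg_kmalloc()/dbg_kfree() names are invented here, nothing from the Umbrella
code, and the sketch assumes a module built against a 2.6-era tree:

    /*
     * Hypothetical debug wrappers -- not a kernel API. Every allocation and
     * free is logged with a leading '0' so that `dmesg | sort` groups the
     * lines by pointer value, as described above.
     */
    #include <linux/kernel.h>
    #include <linux/slab.h>

    #define dbg_kmalloc(size, flags) ({                          \
            void *__p = kmalloc((size), (flags));                \
            printk(KERN_DEBUG "0 alloc %p\n", __p);              \
            __p;                                                 \
    })

    #define dbg_kfree(ptr) do {                                  \
            printk(KERN_DEBUG "0 free  %p\n", (ptr));            \
            kfree(ptr);                                          \
    } while (0)

Replacing the module's kmalloc()/kfree() call sites with these and sorting
the dmesg output gives the paired-pointer listing Richard suggests; any
pointer with an alloc line but no matching free line is a leak candidate.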

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Chris Friesen @ 2004-10-22 19:33 UTC (permalink / raw)
To: Kristian Sørensen; Cc: root, andre, Kasper Sandberg, LKML Mailinglist, umbrella

Kristian Sørensen wrote:
> Anyway - how does this work in practice? Does the file system
> implementation use a wrapper around kfree, or?

When an app faults in new memory and there is no unused memory, the system
will page out apps and/or filesystem data from the page cache so the memory
can be given to the app requesting it.

> Is there any way to force kernel memory to be released the instant it is
> freed?

It's not free, it's in use by the page cache. This is a performance
feature -- we try to keep around as much stuff as possible that might be
needed by running apps.

> Otherwise it is quite hard to test for possible memory leaks in our
> Umbrella kernel module ... :-/

Such is life. As a crude workaround, on a swapless system you can start one
or two memory hogs and they will force the system to free up as much memory
as possible.

Chris
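
A "memory hog" in the sense Chris means is nothing more than a process that
allocates and touches memory until the machine runs short. A minimal sketch
follows; it is illustrative only (the 64 MB chunk size is arbitrary, and on a
box with swap or default overcommit settings the OOM killer may kill it
before malloc() ever returns NULL, so run it on a test machine):

    /* Allocate and dirty memory in 64 MB chunks until malloc() fails,
     * forcing the kernel to shrink the page cache. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            size_t chunk = 64UL * 1024 * 1024;
            size_t held = 0;
            void *p;

            while ((p = malloc(chunk)) != NULL) {
                    memset(p, 1, chunk);   /* touch every page so it is really backed */
                    held += chunk;
                    fprintf(stderr, "holding %lu MB\n", (unsigned long)(held >> 20));
            }
            fprintf(stderr, "allocation failed at %lu MB; holding until killed\n",
                    (unsigned long)(held >> 20));
            pause();                       /* keep the memory pinned */
            return 0;
    }

Watching vmstat or free while this runs shows the cache column collapsing,
which is what Chris's vmstat transcript further down demonstrates.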

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Gene Heskett @ 2004-10-22 16:15 UTC (permalink / raw)
To: linux-kernel, root; +Cc: Kasper Sandberg, Kristian Sørensen, umbrella

On Friday 22 October 2004 11:07, Richard B. Johnson wrote:
> while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ;
> vmstat ; done

Stable, yes. But only after about 3 or 4 iterations. The first 3 rather
handily used 500+ megs of memory that I did not get back when I stopped it
and cleaned up the mess.

--
Cheers, Gene
"There are four boxes to be used in defense of liberty: soap, ballot,
jury, and ammo. Please use in that order." -Ed Howdershelt (Author)
99.28% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attorneys please note, additions to this message by
Gene Heskett are: Copyright 2004 by Maurice Eugene Heskett,
all rights reserved.

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Andre Tomt @ 2004-10-22 16:28 UTC (permalink / raw)
To: gene.heskett; Cc: linux-kernel, root, Kasper Sandberg, Kristian Sørensen, umbrella

Gene Heskett wrote:
> On Friday 22 October 2004 11:07, Richard B. Johnson wrote:
>> while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ;
>> vmstat ; done
>
> Stable, yes. But only after about 3 or 4 iterations. The first 3 rather
> handily used 500+ megs of memory that I did not get back when I stopped
> it and cleaned up the mess.

It should get freed when something else needs it. Usually not before.

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Chris Friesen @ 2004-10-22 16:32 UTC (permalink / raw)
To: gene.heskett; Cc: linux-kernel, root, Kasper Sandberg, Kristian Sørensen, umbrella

Gene Heskett wrote:
> Stable, yes. But only after about 3 or 4 iterations. The first 3 rather
> handily used 500+ megs of memory that I did not get back when I stopped
> it and cleaned up the mess.

Did you run a memory hog to put memory pressure on the system?

The following is with 2.6.9-rc4

-bash-2.05b$ while true ; do tar -xjf linux-2.6.7.tar.bz2 ; rm -rf linux-2.6.7 ; vmstat ; done
procs            memory              swap      io     system      cpu
 r  b  swpd    free   buff  cache  si  so  bi  bo  in  cs  us sy wa id
 1  0     0 1675768 104004 112576   0   0   0   1  11   2   0  0  0 10
procs            memory              swap      io     system      cpu
 r  b  swpd    free   buff  cache  si  so  bi  bo  in  cs  us sy wa id
 1  1     0 1649032 110792 112724   0   0   0   1  11   3   0  0  0 10
procs            memory              swap      io     system      cpu
 r  b  swpd    free   buff  cache  si  so  bi  bo  in  cs  us sy wa id
 1  0     0 1630472 118580 112620   0   0   0   2  11   3   0  0  0 10
procs            memory              swap      io     system      cpu
 r  b  swpd    free   buff  cache  si  so  bi  bo  in  cs  us sy wa id
 1  0     0 1607560 125500 112636   0   0   0   2  11   3   0  0  0 10

After running a memory hog,

-bash-2.05b$ vmstat
procs            memory              swap      io     system      cpu
 r  b  swpd    free   buff  cache  si  so  bi  bo  in  cs  us sy wa id
 0  0     0 1890248    672   4836   0   0   0   3  11   3   0  0  0 10

Looks like the cached memory all got freed, which is exactly as expected.

Chris

* Re: Gigantic memory leak in linux-2.6.[789]!
From: David Lang @ 2004-10-23 0:51 UTC (permalink / raw)
To: Kristian Sørensen; +Cc: linux-kernel, umbrella

On Fri, 22 Oct 2004, Kristian Sørensen wrote:
> After some more testing following the previous post about the OOPS in
> generic_delete_inode, we have now found a gigantic memory leak in Linux
> 2.6.[789]. The scenario is the same:
<SNIP>
> When the loop has completed, the system uses 124 MB more memory _each_
> time... so it is pretty easy to mount a denial-of-service attack :-(
>
> We have tried the same test on a RHEL WS 3 host (running a RedHat 2.4
> kernel) - and there is no problem.
>
> Any ideas?

This is a common mistake that many people make when first looking at the
Linux stats. Linux starts off with most of the memory free, but rapidly
uses it up. It keeps a small amount (a few megs) free at all times; for
the rest, it only gets around to freeing memory (possibly by swapping)
when a new program asks for memory and there is less than the minimum
amount left free.

It does this because there is a chance the memory will be re-used (in your
example, where you were untarring the kernel source, there is a chance
that someone else would be reading that source, and if they did it would
already be in memory and not have to be re-read from disk), and because
there is a chance that nothing will ever need that memory before the
computer is shut off, so it would be a waste of time to do the free (which
includes zeroing out the memory, not just marking it as available).

This puts the cost of zeroing out and freeing memory on new programs that
are allocating memory, which tends to scatter the work over time rather
than having a large burst of work kick in when a program exits (it seems
odd to think that when a large program exits the machine would be pegged
for a little while while it frees up and zeros the memory, not exactly
what you would expect when you killed a program :-)

David Lang

--
There are two ways of constructing a software design. One way is to make
it so simple that there are obviously no deficiencies. And the other way
is to make it so complicated that there are no obvious deficiencies.
 -- C.A.R. Hoare
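
David's point about re-use is easy to see directly: read a large file twice
and time both passes; the second pass is normally served from the page cache.
A small illustrative program (the file name is whatever you pass in, nothing
specific to this thread), offered only as a sketch:

    /* Read the same file twice and report elapsed time for each pass.
     * Build: cc -O2 cache_demo.c -o cache_demo ; run: ./cache_demo <file> */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/time.h>

    static double read_all(const char *path)
    {
            char buf[1 << 16];
            struct timeval t0, t1;
            ssize_t n;
            int fd = open(path, O_RDONLY);

            if (fd < 0) { perror(path); exit(1); }
            gettimeofday(&t0, NULL);
            while ((n = read(fd, buf, sizeof buf)) > 0)
                    ;                       /* just pull the data through */
            gettimeofday(&t1, NULL);
            close(fd);
            return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    }

    int main(int argc, char **argv)
    {
            if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
            printf("first read : %.3f s\n", read_all(argv[1]));
            printf("second read: %.3f s (served from page cache)\n", read_all(argv[1]));
            return 0;
    }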

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Bill Davidsen @ 2004-10-24 14:14 UTC (permalink / raw)
To: linux-kernel

David Lang wrote:

> This puts the cost of zeroing out and freeing memory on new programs that
> are allocating memory, which tends to scatter the work over time rather
> than having a large burst of work kick in when a program exits (it seems
> odd to think that when a large program exits the machine would be pegged
> for a little while while it frees up and zeros the memory, not exactly
> what you would expect when you killed a program :-)

And this partially explains why response is bad every morning when
starting daily operation. Instead of using the totally unproductive time
in the idle loop to zero and free those pages when it would not hurt
response, the kernel saves that work for the next time the memory is
needed, lest it do work which might not be needed before the system is
shut down.

With all the work Nick, Ingo, Con and others are putting into latency and
responsiveness, I don't understand why anyone thinks this is desirable
behavior. The idle loop is the perfect place to perform things like this,
to convert non-productive cycles into tasks which will directly improve
response and performance when the task MUST be done. Things like zeroing
these pages, perhaps defragmenting memory, anything which can be done in
small parts.

It would seem that doing things like this in small inefficient steps in
idle moments is still better than doing them efficiently while a process
is waiting for the resources being freed.

--
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Tommy Reynolds @ 2004-10-24 16:04 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel

Uttered Bill Davidsen <davidsen@tmr.com>, spake thus:

> With all the work Nick, Ingo, Con and others are putting into latency and
> responsiveness, I don't understand why anyone thinks this is desirable
> behavior. The idle loop is the perfect place to perform things like this,
> to convert non-productive cycles into tasks which will directly improve
> response and performance when the task MUST be done.

Bill, with respect,

The idle loop is, by definition, the place to go when there is nothing
else to do. Scrubbing memory is, by definition, not "nothing", so leave
the idle loop alone.

That's why God, or maybe it was Linus, invented kernel threads.

Cheers!

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Bill Davidsen @ 2004-10-25 22:11 UTC (permalink / raw)
To: Tommy Reynolds; +Cc: linux-kernel

Tommy Reynolds wrote:
> The idle loop is, by definition, the place to go when there is nothing
> else to do. Scrubbing memory is, by definition, not "nothing", so leave
> the idle loop alone.
>
> That's why God, or maybe it was Linus, invented kernel threads.

Did you really not know what I meant here, or are you being pedantic
about the nomenclature? Yes, obviously implemented by thread(s) with
priority lower than whale shit, the object being to do the work when no
process is waiting for the CPU, and in very small steps so the CPU isn't
tied up.

--
 -bill davidsen (davidsen@tmr.com)
"The secret to procrastination is to put things off until the
 last possible moment - but no longer"  -me

* Re: Gigantic memory leak in linux-2.6.[789]!
From: David Lang @ 2004-10-25 22:47 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel

On Sun, 24 Oct 2004, Bill Davidsen wrote:
> And this partially explains why response is bad every morning when
> starting daily operation. Instead of using the totally unproductive time
> in the idle loop to zero and free those pages when it would not hurt
> response, the kernel saves that work for the next time the memory is
> needed, lest it do work which might not be needed before the system is
> shut down.

Actually, what has usually happened is that updatedb ran overnight and
used all your memory for its work, so all your application stuff got
thrown away or swapped out as it appeared less useful than the
then-active process. So first thing in the morning you need a lot of disk
reads to get your desktop working set back into memory. The cost of
zeroing the pages is minor compared to the disk I/O.

> It would seem that doing things like this in small inefficient steps in
> idle moments is still better than doing them efficiently while a process
> is waiting for the resources being freed.

The problem is that you don't know that you need to throw away the data.
The next thing you try to do could re-use the data that is in RAM; how
can the system know?

David Lang

--
There are two ways of constructing a software design. One way is to make
it so simple that there are obviously no deficiencies. And the other way
is to make it so complicated that there are no obvious deficiencies.
 -- C.A.R. Hoare

* Re: Gigantic memory leak in linux-2.6.[789]!
From: Bernd Eckenfels @ 2004-10-23 1:44 UTC (permalink / raw)
To: linux-kernel

In article <200410221613.35913.ks@cs.aau.dk> you wrote:
> When the loop has completed, the system uses 124 MB more memory _each_
> time... so it is pretty easy to mount a denial-of-service attack :-(

For starters I recommend looking at "free", and only at the marked
numbers:

             total       used       free     shared    buffers     cached
Mem:        126368     108432      17936          0       6532      42104
-/+ buffers/cache:      59796*     66572*
Swap:       262128      43400     218728

or at the swap numbers if you have low memory (like I do).

Gruss
Bernd

--
eckes privat - http://www.eckes.org/
Project Freefire - http://www.freefire.org/