linux-mm.kvack.org archive mirror
* VM/VFS bug with large amount of memory and file systems?
@ 2007-09-15  7:27 Anton Altaparmakov
  2007-09-15 10:08 ` Peter Zijlstra
  2007-10-03 10:51 ` Pavel Machek
  0 siblings, 2 replies; 24+ messages in thread
From: Anton Altaparmakov @ 2007-09-15  7:27 UTC (permalink / raw)
  To: Linux Kernel Mailing List, Linux Memory Management List; +Cc: marc.smith

Hi,

Mark Smith reported an OOM condition when he copies a large (42GiB)  
file from an NTFS partition (using the stock kernel driver) to  
/dev/null (or to a file on ext3, same result).

The machine this runs on has an i386 kernel with 12GiB RAM (yes,  
this is not a typo, it really is 12GiB!).

When you do:

escabot CAD # ls -ltrh
total 49G
-r-------- 1 root root 2.0G Oct  7  2004 RTC26001.GHS
-r-------- 1 root root 2.0G Oct  7  2004 RTC26002.GHS
-r-------- 1 root root 2.0G Oct  7  2004 RTC2604.GHO
-r-------- 1 root root 620M Oct  7  2004 RTC26003.GHS
-r-------- 1 root root    4 Sep 14 09:32 asdf.txt
-r-------- 1 root root    8 Sep 14 09:33 rty.txt
-r-------- 1 root root  42G Sep 14 10:36 bigfile.gho
escabot CAD # cp bigfile.gho /dev/null

It dies with an OOM and an eventual panic with the message: "Kernel  
panic - not syncing: Out of memory and no killable processes."

If Mark boots with "mem=2G" on the kernel command line and then  
repeats the above copy, the copy succeeds just fine, so there is  
nothing inherently wrong with NTFS.
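(For anyone wanting to reproduce that workaround: it is just the mem=  
parameter appended to the kernel line of the boot loader entry. A  
sketch for GRUB legacy follows; the kernel image name is taken from  
the dmesg below, while the root device and paths are illustrative  
only:)

```text
# /boot/grub/grub.conf -- illustrative entry, device names hypothetical
title Gentoo Linux 2.6.21 (limited to 2GiB for testing)
root (hd0,0)
kernel /boot/vmlinuz-2.6.21-gentoo-r4 root=/dev/sda3 mem=2G
```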

So the OOM happens with 12GiB RAM but not with 2GiB RAM.  I guess  
this is why I have never seen this in my testing.  My laptop only has  
2GiB RAM...
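One thing that might help whoever picks this up: on i386 with this  
much RAM only 896MiB is lowmem (see the dmesg below), so watching the  
lowmem and writeback counters while the copy runs may show which pool  
is being exhausted. A rough sketch only; the field names are the  
standard /proc/meminfo ones, and the interval and sample count are  
arbitrary:

```shell
# Sample low-memory and writeback counters once a second while the
# copy runs in another terminal (bounded here for illustration; loop
# forever and stop with Ctrl-C if you prefer).
for i in 1 2 3 4 5; do
    grep -E '^(LowFree|Buffers|Dirty|Writeback):' /proc/meminfo
    echo '---'
    sleep 1
done
```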

Below are the relevant parts of the dmesg output that Mark sent,  
showing the kernel messages from when this happens.  (I have taken  
out the NTFS debug output to keep the log shorter; it shows nothing  
untoward, just the file being read in, but I can send it on request.)

Does anyone have any ideas / want to work with Mark (he is CC:-ed) to  
try and figure out the bug and get it fixed?  I am sure Mark will be  
happy to provide further information/try things out...

If it turns out to be something NTFS is doing wrong I am happy to try  
and fix it but at the moment I assume it is something the VM is doing  
wrong when there is so much RAM available...

Best regards,

	Anton
-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/

<quote from dmesg output>
Sep 13 15:00:22 escabot Linux version 2.6.21-gentoo-r4 (root@escabot)  
(gcc version 4.1.2 (Gentoo 4.1.2 p1.0.1)) #3 SMP PREEMPT Thu Sep 13  
14:44:12 EDT 2007
Sep 13 15:00:22 escabot BIOS-provided physical RAM map:
Sep 13 15:00:22 escabot sanitize start
Sep 13 15:00:22 escabot sanitize end
Sep 13 15:00:22 escabot copy_e820_map() start: 0000000000000000 size:  
000000000009ec00 end: 000000000009ec00 type: 1
Sep 13 15:00:22 escabot copy_e820_map() type is E820_RAM
Sep 13 15:00:22 escabot copy_e820_map() start: 000000000009ec00 size:  
0000000000001400 end: 00000000000a0000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000000f0000 size:  
0000000000010000 end: 0000000000100000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 0000000000100000 size:  
00000000f56f8000 end: 00000000f57f8000 type: 1
Sep 13 15:00:22 escabot copy_e820_map() type is E820_RAM
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000f57f8000 size:  
0000000000008000 end: 00000000f5800000 type: 3
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000fdc00000 size:  
0000000000001000 end: 00000000fdc01000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000fdc10000 size:  
0000000000001000 end: 00000000fdc11000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000fec00000 size:  
0000000000001000 end: 00000000fec01000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000fec10000 size:  
0000000000001000 end: 00000000fec11000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000fec20000 size:  
0000000000001000 end: 00000000fec21000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000fee00000 size:  
0000000000010000 end: 00000000fee10000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 00000000ff800000 size:  
0000000000800000 end: 0000000100000000 type: 2
Sep 13 15:00:22 escabot copy_e820_map() start: 0000000100000000 size:  
00000001fffff000 end: 00000002fffff000 type: 1
Sep 13 15:00:22 escabot copy_e820_map() type is E820_RAM
Sep 13 15:00:22 escabot BIOS-e820: 0000000000000000 -  
000000000009ec00 (usable)
Sep 13 15:00:22 escabot BIOS-e820: 000000000009ec00 -  
00000000000a0000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 00000000000f0000 -  
0000000000100000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 0000000000100000 -  
00000000f57f8000 (usable)
Sep 13 15:00:22 escabot BIOS-e820: 00000000f57f8000 -  
00000000f5800000 (ACPI data)
Sep 13 15:00:22 escabot BIOS-e820: 00000000fdc00000 -  
00000000fdc01000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 00000000fdc10000 -  
00000000fdc11000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 00000000fec00000 -  
00000000fec01000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 00000000fec10000 -  
00000000fec11000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 00000000fec20000 -  
00000000fec21000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 00000000fee00000 -  
00000000fee10000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 00000000ff800000 -  
0000000100000000 (reserved)
Sep 13 15:00:22 escabot BIOS-e820: 0000000100000000 -  
00000002fffff000 (usable)
Sep 13 15:00:22 escabot 11391MB HIGHMEM available.
Sep 13 15:00:22 escabot 896MB LOWMEM available.
Sep 13 15:00:22 escabot found SMP MP-table at 000f4fa0
Sep 13 15:00:22 escabot NX (Execute Disable) protection: active
Sep 13 15:00:22 escabot Entering add_active_range(0, 0, 3145727) 0  
entries of 256 used
Sep 13 15:00:22 escabot Zone PFN ranges:
Sep 13 15:00:22 escabot DMA             0 ->     4096
Sep 13 15:00:22 escabot Normal       4096 ->   229376
Sep 13 15:00:22 escabot HighMem    229376 ->  3145727
Sep 13 15:00:22 escabot early_node_map[1] active PFN ranges
Sep 13 15:00:22 escabot 0:        0 ->  3145727
Sep 13 15:00:22 escabot On node 0 totalpages: 3145727
Sep 13 15:00:22 escabot DMA zone: 32 pages used for memmap
Sep 13 15:00:22 escabot DMA zone: 0 pages reserved
Sep 13 15:00:22 escabot DMA zone: 4064 pages, LIFO batch:0
Sep 13 15:00:22 escabot Normal zone: 1760 pages used for memmap
Sep 13 15:00:22 escabot Normal zone: 223520 pages, LIFO batch:31
Sep 13 15:00:22 escabot HighMem zone: 22783 pages used for memmap
Sep 13 15:00:22 escabot HighMem zone: 2893568 pages, LIFO batch:31
Sep 13 15:00:22 escabot DMI 2.3 present.
Sep 13 15:00:22 escabot ACPI: RSDP 000F4F20, 0024 (r2 HP    )
Sep 13 15:00:22 escabot ACPI: XSDT F57F83E0, 0044 (r1 HP      
A05             2   ?\x04     162E)
Sep 13 15:00:22 escabot ACPI: FACP F57F8460, 00F4 (r3 HP      
A05             2   ?\x04     162E)
Sep 13 15:00:22 escabot ACPI: DSDT F57F8560, 422D (r1 HP          
DSDT        1 MSFT  2000001)
Sep 13 15:00:22 escabot ACPI: FACS F57F80C0, 0040
Sep 13 15:00:22 escabot ACPI: APIC F57F8100, 00B8 (r1 HP      
00000083        2             0)
Sep 13 15:00:22 escabot ACPI: SPCR F57F81E0, 0050 (r1 HP      
SPCRRBSU        1   ?\x04     162E)
Sep 13 15:00:22 escabot ACPI: SRAT F57F8260, 0150 (r1 HP      
A05             1             0)
Sep 13 15:00:22 escabot ACPI: PM-Timer IO Port: 0x908
Sep 13 15:00:22 escabot ACPI: Local APIC address 0xfee00000
Sep 13 15:00:22 escabot ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00]  
enabled)
Sep 13 15:00:22 escabot Processor #0 15:1 APIC version 16
Sep 13 15:00:22 escabot ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02]  
enabled)
Sep 13 15:00:22 escabot Processor #2 15:1 APIC version 16
Sep 13 15:00:22 escabot ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04]  
disabled)
Sep 13 15:00:22 escabot ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06]  
disabled)
Sep 13 15:00:22 escabot ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01]  
enabled)
Sep 13 15:00:22 escabot Processor #1 15:1 APIC version 16
Sep 13 15:00:22 escabot ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03]  
enabled)
Sep 13 15:00:22 escabot Processor #3 15:1 APIC version 16
Sep 13 15:00:22 escabot ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05]  
disabled)
Sep 13 15:00:22 escabot ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07]  
disabled)
Sep 13 15:00:22 escabot ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
Sep 13 15:00:22 escabot ACPI: IOAPIC (id[0x04] address[0xfec00000]  
gsi_base[0])
Sep 13 15:00:22 escabot IOAPIC[0]: apic_id 4, version 17, address  
0xfec00000, GSI 0-23
Sep 13 15:00:22 escabot ACPI: IOAPIC (id[0x05] address[0xfec10000]  
gsi_base[24])
Sep 13 15:00:22 escabot IOAPIC[1]: apic_id 5, version 17, address  
0xfec10000, GSI 24-27
Sep 13 15:00:22 escabot ACPI: IOAPIC (id[0x06] address[0xfec20000]  
gsi_base[28])
Sep 13 15:00:22 escabot IOAPIC[2]: apic_id 6, version 17, address  
0xfec20000, GSI 28-31
Sep 13 15:00:22 escabot ACPI: IOAPIC (id[0x07] address[0xfdc00000]  
gsi_base[32])
Sep 13 15:00:22 escabot IOAPIC[3]: apic_id 7, version 17, address  
0xfdc00000, GSI 32-35
Sep 13 15:00:22 escabot ACPI: IOAPIC (id[0x08] address[0xfdc10000]  
gsi_base[36])
Sep 13 15:00:22 escabot IOAPIC[4]: apic_id 8, version 17, address  
0xfdc10000, GSI 36-39
Sep 13 15:00:22 escabot ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq  
2 high edge)
Sep 13 15:00:22 escabot ACPI: IRQ0 used by override.
Sep 13 15:00:22 escabot ACPI: IRQ2 used by override.
Sep 13 15:00:22 escabot ACPI: IRQ9 used by override.
Sep 13 15:00:22 escabot Enabling APIC mode:  Flat.  Using 5 I/O APICs
Sep 13 15:00:22 escabot Using ACPI (MADT) for SMP configuration  
information
Sep 13 15:00:22 escabot Allocating PCI resources starting at f6000000  
(gap: f5800000:08400000)
Sep 13 15:00:22 escabot Built 1 zonelists.  Total pages: 3121152
Sep 13 15:00:22 escabot Kernel command line: debug_msgs=1
Sep 13 15:00:22 escabot mapped APIC to ffffd000 (fee00000)
Sep 13 15:00:22 escabot mapped IOAPIC to ffffc000 (fec00000)
Sep 13 15:00:22 escabot mapped IOAPIC to ffffb000 (fec10000)
Sep 13 15:00:22 escabot mapped IOAPIC to ffffa000 (fec20000)
Sep 13 15:00:22 escabot mapped IOAPIC to ffff9000 (fdc00000)
Sep 13 15:00:22 escabot mapped IOAPIC to ffff8000 (fdc10000)
Sep 13 15:00:22 escabot Enabling fast FPU save and restore... done.
Sep 13 15:00:22 escabot Enabling unmasked SIMD FPU exception  
support... done.
Sep 13 15:00:22 escabot Initializing CPU#0
Sep 13 15:00:22 escabot PID hash table entries: 4096 (order: 12,  
16384 bytes)
Sep 13 15:00:22 escabot Detected 2405.604 MHz processor.
Sep 13 15:00:22 escabot Console: colour VGA+ 80x25
Sep 13 15:00:22 escabot Dentry cache hash table entries: 131072  
(order: 7, 524288 bytes)
Sep 13 15:00:22 escabot Inode-cache hash table entries: 65536 (order:  
6, 262144 bytes)
Sep 13 15:00:22 escabot Memory: 12304772k/12582908k available (3439k  
kernel code, 104940k reserved, 977k data, 240k init, 11493340k highmem)
Sep 13 15:00:22 escabot virtual kernel memory layout:
Sep 13 15:00:22 escabot fixmap  : 0xfff4f000 - 0xfffff000   ( 704 kB)
Sep 13 15:00:22 escabot pkmap   : 0xffc00000 - 0xffe00000   (2048 kB)
Sep 13 15:00:22 escabot vmalloc : 0xf8800000 - 0xffbfe000   ( 115 MB)
Sep 13 15:00:22 escabot lowmem  : 0xc0000000 - 0xf8000000   ( 896 MB)
Sep 13 15:00:22 escabot .init : 0xc0558000 - 0xc0594000   ( 240 kB)
Sep 13 15:00:22 escabot .data : 0xc045bce5 - 0xc055014c   ( 977 kB)
Sep 13 15:00:22 escabot .text : 0xc0100000 - 0xc045bce5   (3439 kB)
Sep 13 15:00:22 escabot Checking if this processor honours the WP bit  
even in supervisor mode... Ok.
Sep 13 15:00:22 escabot Calibrating delay using timer specific  
routine.. 4815.63 BogoMIPS (lpj=9631263)
Sep 13 15:00:22 escabot Mount-cache hash table entries: 512
Sep 13 15:00:22 escabot CPU: After generic identify, caps: 178bfbff  
e3d3fbff 00000000 00000000 00000001 00000000 00000002
Sep 13 15:00:22 escabot CPU: L1 I Cache: 64K (64 bytes/line), D cache  
64K (64 bytes/line)
Sep 13 15:00:22 escabot CPU: L2 Cache: 1024K (64 bytes/line)
Sep 13 15:00:22 escabot CPU 0(2) -> Core 0
Sep 13 15:00:22 escabot CPU: After all inits, caps: 178bfbff e3d3fbff  
00000000 00000410 00000001 00000000 00000002
Sep 13 15:00:22 escabot Intel machine check architecture supported.
Sep 13 15:00:22 escabot Intel machine check reporting enabled on CPU#0.
Sep 13 15:00:22 escabot Compat vDSO mapped to ffffe000.
Sep 13 15:00:22 escabot Checking 'hlt' instruction... OK.
Sep 13 15:00:22 escabot Freeing SMP alternatives: 15k freed
Sep 13 15:00:22 escabot ACPI: Core revision 20070126
Sep 13 15:00:22 escabot CPU0: AMD Opteron(tm) Processor 280 stepping 02
Sep 13 15:00:22 escabot Booting processor 1/1 eip 2000
Sep 13 15:00:22 escabot Initializing CPU#1
Sep 13 15:00:22 escabot Calibrating delay using timer specific  
routine.. 4811.22 BogoMIPS (lpj=9622459)
Sep 13 15:00:22 escabot CPU: After generic identify, caps: 178bfbff  
e3d3fbff 00000000 00000000 00000001 00000000 00000002
Sep 13 15:00:22 escabot CPU: L1 I Cache: 64K (64 bytes/line), D cache  
64K (64 bytes/line)
Sep 13 15:00:22 escabot CPU: L2 Cache: 1024K (64 bytes/line)
Sep 13 15:00:22 escabot CPU 1(2) -> Core 1
Sep 13 15:00:22 escabot CPU: After all inits, caps: 178bfbff e3d3fbff  
00000000 00000410 00000001 00000000 00000002
Sep 13 15:00:22 escabot Intel machine check architecture supported.
Sep 13 15:00:22 escabot Intel machine check reporting enabled on CPU#1.
Sep 13 15:00:22 escabot CPU1: AMD Opteron(tm) Processor 280 stepping 02
Sep 13 15:00:22 escabot Booting processor 2/2 eip 2000
Sep 13 15:00:22 escabot Initializing CPU#2
Sep 13 15:00:22 escabot Calibrating delay using timer specific  
routine.. 4811.19 BogoMIPS (lpj=9622384)
Sep 13 15:00:22 escabot CPU: After generic identify, caps: 178bfbff  
e3d3fbff 00000000 00000000 00000001 00000000 00000002
Sep 13 15:00:22 escabot CPU: L1 I Cache: 64K (64 bytes/line), D cache  
64K (64 bytes/line)
Sep 13 15:00:22 escabot CPU: L2 Cache: 1024K (64 bytes/line)
Sep 13 15:00:22 escabot CPU 2(2) -> Core 0
Sep 13 15:00:22 escabot CPU: After all inits, caps: 178bfbff e3d3fbff  
00000000 00000410 00000001 00000000 00000002
Sep 13 15:00:22 escabot Intel machine check architecture supported.
Sep 13 15:00:22 escabot Intel machine check reporting enabled on CPU#2.
Sep 13 15:00:22 escabot CPU2: AMD Opteron(tm) Processor 280 stepping 02
Sep 13 15:00:22 escabot Booting processor 3/3 eip 2000
Sep 13 15:00:22 escabot Initializing CPU#3
Sep 13 15:00:22 escabot Calibrating delay using timer specific  
routine.. 4811.27 BogoMIPS (lpj=9622546)
Sep 13 15:00:22 escabot CPU: After generic identify, caps: 178bfbff  
e3d3fbff 00000000 00000000 00000001 00000000 00000002
Sep 13 15:00:22 escabot CPU: L1 I Cache: 64K (64 bytes/line), D cache  
64K (64 bytes/line)
Sep 13 15:00:22 escabot CPU: L2 Cache: 1024K (64 bytes/line)
Sep 13 15:00:22 escabot CPU 3(2) -> Core 1
Sep 13 15:00:22 escabot CPU: After all inits, caps: 178bfbff e3d3fbff  
00000000 00000410 00000001 00000000 00000002
Sep 13 15:00:22 escabot Intel machine check architecture supported.
Sep 13 15:00:22 escabot Intel machine check reporting enabled on CPU#3.
Sep 13 15:00:22 escabot CPU3: AMD Opteron(tm) Processor 280 stepping 02
Sep 13 15:00:22 escabot Total of 4 processors activated (19249.32  
BogoMIPS).
Sep 13 15:00:22 escabot ENABLING IO-APIC IRQs
Sep 13 15:00:22 escabot ..TIMER: vector=0x31 apic1=0 pin1=2 apic2=0  
pin2=0
Sep 13 15:00:22 escabot Brought up 4 CPUs
Sep 13 15:00:22 escabot migration_cost=4000
Sep 13 15:00:22 escabot NET: Registered protocol family 16
Sep 13 15:00:22 escabot ACPI: bus type pci registered
Sep 13 15:00:22 escabot PCI: PCI BIOS revision 2.10 entry at 0xf0096,  
last bus=6
Sep 13 15:00:22 escabot PCI: Using configuration type 1
Sep 13 15:00:22 escabot Setting up standard PCI resources
Sep 13 15:00:22 escabot ACPI: SSDT F57FD000, 059D (r1 HP         
SSDT0        1 MSFT  2000001)
Sep 13 15:00:22 escabot ACPI: SSDT F57FD700, 059D (r1 HP         
SSDT1        1 MSFT  2000001)
Sep 13 15:00:22 escabot ACPI: SSDT F57FDE00, 059D (r1 HP         
SSDT2        1 MSFT  2000001)
Sep 13 15:00:22 escabot ACPI: SSDT F57FE500, 059D (r1 HP         
SSDT3        1 MSFT  2000001)
Sep 13 15:00:22 escabot ACPI: Interpreter enabled
Sep 13 15:00:22 escabot ACPI: Using IOAPIC for interrupt routing
Sep 13 15:00:22 escabot ACPI: PCI Root Bridge [CFG0] (0000:00)
Sep 13 15:00:22 escabot PCI: Probing PCI hardware (bus 00)
Sep 13 15:00:22 escabot Boot video device is 0000:01:03.0
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Routing Table  
[\_SB_.CFG0._PRT]
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Routing Table  
[\_SB_.CFG0.PCI0._PRT]
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Routing Table  
[\_SB_.CFG0.PCI1._PRT]
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Routing Table  
[\_SB_.CFG0.PCI2._PRT]
Sep 13 15:00:22 escabot ACPI: PCI Root Bridge [CFG1] (0000:04)
Sep 13 15:00:22 escabot PCI: Probing PCI hardware (bus 04)
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Routing Table  
[\_SB_.CFG1.PCI3._PRT]
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Routing Table  
[\_SB_.CFG1.PCI4._PRT]
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Link [LNKA] (IRQs 3 5 *7  
10 11)
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Link [LNKB] (IRQs 3 5 7  
10 11) *15
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Link [LNKC] (IRQs 3 5 7  
*10 11)
Sep 13 15:00:22 escabot ACPI: PCI Interrupt Link [LNKD] (IRQs 3 *5 7  
10 11)
Sep 13 15:00:22 escabot Linux Plug and Play Support v0.97 (c) Adam Belay
Sep 13 15:00:22 escabot pnp: PnP ACPI init
Sep 13 15:00:22 escabot pnp: PnP ACPI: found 10 devices
Sep 13 15:00:22 escabot SCSI subsystem initialized
Sep 13 15:00:22 escabot usbcore: registered new interface driver usbfs
Sep 13 15:00:22 escabot usbcore: registered new interface driver hub
Sep 13 15:00:22 escabot usbcore: registered new device driver usb
Sep 13 15:00:22 escabot PCI: Using ACPI for IRQ routing
Sep 13 15:00:22 escabot PCI: If a device doesn't work, try  
"pci=routeirq".  If it helps, post a report
Sep 13 15:00:22 escabot pnp: 00:01: ioport range 0x230-0x233 has been  
reserved
Sep 13 15:00:22 escabot pnp: 00:01: ioport range 0x260-0x267 has been  
reserved
Sep 13 15:00:22 escabot pnp: 00:01: ioport range 0x4d0-0x4d1 has been  
reserved
Sep 13 15:00:22 escabot PCI: Bridge: 0000:00:03.0
Sep 13 15:00:22 escabot IO window: 4000-4fff
Sep 13 15:00:22 escabot MEM window: f5f00000-f7bfffff
Sep 13 15:00:22 escabot PREFETCH window: f8000000-f80fffff
Sep 13 15:00:22 escabot PCI: Bridge: 0000:00:07.0
Sep 13 15:00:22 escabot IO window: 5000-5fff
Sep 13 15:00:22 escabot MEM window: f7c00000-f7cfffff
Sep 13 15:00:22 escabot PREFETCH window: f8100000-f81fffff
Sep 13 15:00:22 escabot Time: acpi_pm clocksource has been installed.
Sep 13 15:00:22 escabot PCI: Bridge: 0000:00:08.0
Sep 13 15:00:22 escabot IO window: disabled.
Sep 13 15:00:22 escabot MEM window: f7d00000-f7dfffff
Sep 13 15:00:22 escabot PREFETCH window: disabled.
Sep 13 15:00:22 escabot PCI: Bridge: 0000:04:09.0
Sep 13 15:00:22 escabot IO window: 6000-6fff
Sep 13 15:00:22 escabot MEM window: f7e00000-f7efffff
Sep 13 15:00:22 escabot PREFETCH window: f8200000-f82fffff
Sep 13 15:00:22 escabot PCI: Bridge: 0000:04:0a.0
Sep 13 15:00:22 escabot IO window: 7000-7fff
Sep 13 15:00:22 escabot MEM window: f7f00000-f7ffffff
Sep 13 15:00:22 escabot PREFETCH window: f8300000-f83fffff
Sep 13 15:00:22 escabot NET: Registered protocol family 2
Sep 13 15:00:22 escabot IP route cache hash table entries: 32768  
(order: 5, 131072 bytes)
Sep 13 15:00:22 escabot TCP established hash table entries: 131072  
(order: 9, 2097152 bytes)
Sep 13 15:00:22 escabot TCP bind hash table entries: 65536 (order: 7,  
786432 bytes)
Sep 13 15:00:22 escabot TCP: Hash tables configured (established  
131072 bind 65536)
Sep 13 15:00:22 escabot TCP reno registered
Sep 13 15:00:22 escabot checking if image is initramfs...it isn't (no  
cpio magic); looks like an initrd
Sep 13 15:00:22 escabot Freeing initrd memory: 689k freed
Sep 13 15:00:22 escabot Machine check exception polling timer started.
Sep 13 15:00:22 escabot audit: initializing netlink socket (disabled)
Sep 13 15:00:22 escabot audit(1189695589.692:1): initialized
Sep 13 15:00:22 escabot highmem bounce pool size: 64 pages
Sep 13 15:00:22 escabot Installing knfsd (copyright (C) 1996  
okir@monad.swb.de).
Sep 13 15:00:22 escabot NTFS driver 2.1.28 [Flags: R/W DEBUG].
Sep 13 15:00:22 escabot SGI XFS with ACLs, large block numbers, no  
debug enabled
Sep 13 15:00:22 escabot SGI XFS Quota Management subsystem
Sep 13 15:00:22 escabot io scheduler noop registered
Sep 13 15:00:22 escabot io scheduler anticipatory registered (default)
Sep 13 15:00:22 escabot io scheduler deadline registered
Sep 13 15:00:22 escabot io scheduler cfq registered
Sep 13 15:00:22 escabot Linux agpgart interface v0.102 (c) Dave Jones
Sep 13 15:00:22 escabot [drm] Initialized drm 1.1.0 20060810
Sep 13 15:00:22 escabot input: Power Button (FF) as /class/input/input0
Sep 13 15:00:22 escabot ACPI: Power Button (FF) [PWRF]
Sep 13 15:00:22 escabot Serial: 8250/16550 driver $Revision: 1.90 $ 4  
ports, IRQ sharing disabled
Sep 13 15:00:22 escabot serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a  
16550A
Sep 13 15:00:22 escabot serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a  
16550A
Sep 13 15:00:22 escabot 00:07: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Sep 13 15:00:22 escabot Floppy drive(s): fd0 is 1.44M
Sep 13 15:00:22 escabot FDC 0 is a National Semiconductor PC87306
Sep 13 15:00:22 escabot RAMDISK driver initialized: 16 RAM disks of  
4096K size 1024 blocksize
Sep 13 15:00:22 escabot loop: loaded (max 8 devices)
Sep 13 15:00:22 escabot Intel(R) PRO/1000 Network Driver - version  
7.3.20-k2
Sep 13 15:00:22 escabot Copyright (c) 1999-2006 Intel Corporation.
Sep 13 15:00:22 escabot ACPI: PCI Interrupt 0000:05:07.0[A] -> GSI 34  
(level, low) -> IRQ 16
Sep 13 15:00:22 escabot e1000: 0000:05:07.0: e1000_probe: (PCI-X:100MHz:64-bit) 00:11:0a:5b:21:98
Sep 13 15:00:22 escabot e1000: eth0: e1000_probe: Intel(R) PRO/1000  
Network Connection
Sep 13 15:00:22 escabot ACPI: PCI Interrupt 0000:05:07.1[B] -> GSI 35  
(level, low) -> IRQ 17
Sep 13 15:00:22 escabot e1000: 0000:05:07.1: e1000_probe: (PCI-X:100MHz:64-bit) 00:11:0a:5b:21:99
Sep 13 15:00:22 escabot e1000: eth1: e1000_probe: Intel(R) PRO/1000  
Network Connection
Sep 13 15:00:22 escabot tg3.c:v3.75.2 (June 5, 2007)
Sep 13 15:00:22 escabot ACPI: PCI Interrupt 0000:03:06.0[A] -> GSI 28  
(level, low) -> IRQ 18
Sep 13 15:00:22 escabot eth2: Tigon3 [partno(N/A) rev 2100 PHY(5704)]  
(PCIX:133MHz:64-bit) 10/100/1000Base-T Ethernet 00:16:35:3b:6b:74
Sep 13 15:00:22 escabot eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] WireSpeed[1] TSOcap[0]
Sep 13 15:00:22 escabot eth2: dma_rwctrl[769f4000] dma_mask[64-bit]
Sep 13 15:00:22 escabot ACPI: PCI Interrupt 0000:03:06.1[B] -> GSI 29  
(level, low) -> IRQ 19
Sep 13 15:00:22 escabot eth3: Tigon3 [partno(N/A) rev 2100 PHY(5704)]  
(PCIX:133MHz:64-bit) 10/100/1000Base-T Ethernet 00:16:35:3b:6b:73
Sep 13 15:00:22 escabot eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] WireSpeed[1] TSOcap[1]
Sep 13 15:00:22 escabot eth3: dma_rwctrl[769f4000] dma_mask[64-bit]
Sep 13 15:00:22 escabot Uniform Multi-Platform E-IDE driver Revision:  
7.00alpha2
Sep 13 15:00:22 escabot ide: Assuming 33MHz system bus speed for PIO  
modes; override with idebus=xx
Sep 13 15:00:22 escabot Probing IDE interface ide0...
Sep 13 15:00:22 escabot hda: CD-224E, ATAPI CD/DVD-ROM drive
Sep 13 15:00:22 escabot Probing IDE interface ide1...
Sep 13 15:00:22 escabot ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Sep 13 15:00:22 escabot hda: ATAPI 24X CD-ROM drive, 128kB Cache
Sep 13 15:00:22 escabot Uniform CD-ROM driver Revision: 3.20
Sep 13 15:00:22 escabot megaraid cmm: 2.20.2.7 (Release Date: Sun Jul  
16 00:01:03 EST 2006)
Sep 13 15:00:22 escabot megaraid: 2.20.5.1 (Release Date: Thu Nov 16  
15:32:35 EST 2006)
Sep 13 15:00:22 escabot st: Version 20070203, fixed bufsize 32768, s/g segs 256
Sep 13 15:00:22 escabot SCSI Media Changer driver v0.25
Sep 13 15:00:22 escabot usbmon: debugfs is not available
Sep 13 15:00:22 escabot ohci_hcd: 2006 August 04 USB 1.1 'Open' Host  
Controller (OHCI) Driver
Sep 13 15:00:22 escabot ACPI: PCI Interrupt 0000:01:00.0[D] -> GSI 19  
(level, low) -> IRQ 20
Sep 13 15:00:22 escabot ohci_hcd 0000:01:00.0: OHCI Host Controller
Sep 13 15:00:22 escabot ohci_hcd 0000:01:00.0: new USB bus  
registered, assigned bus number 1
Sep 13 15:00:22 escabot ohci_hcd 0000:01:00.0: irq 20, io mem 0xf7bf0000
Sep 13 15:00:22 escabot usb usb1: configuration #1 chosen from 1 choice
Sep 13 15:00:22 escabot hub 1-0:1.0: USB hub found
Sep 13 15:00:22 escabot hub 1-0:1.0: 3 ports detected
Sep 13 15:00:22 escabot ACPI: PCI Interrupt 0000:01:00.1[D] -> GSI 19  
(level, low) -> IRQ 20
Sep 13 15:00:22 escabot ohci_hcd 0000:01:00.1: OHCI Host Controller
Sep 13 15:00:22 escabot ohci_hcd 0000:01:00.1: new USB bus  
registered, assigned bus number 2
Sep 13 15:00:22 escabot ohci_hcd 0000:01:00.1: irq 20, io mem 0xf7be0000
Sep 13 15:00:22 escabot usb usb2: configuration #1 chosen from 1 choice
Sep 13 15:00:22 escabot hub 2-0:1.0: USB hub found
Sep 13 15:00:22 escabot hub 2-0:1.0: 3 ports detected
Sep 13 15:00:22 escabot usbcore: registered new interface driver usblp
Sep 13 15:00:22 escabot drivers/usb/class/usblp.c: v0.13: USB Printer  
Device Class driver
Sep 13 15:00:22 escabot Initializing USB Mass Storage driver...
Sep 13 15:00:22 escabot usbcore: registered new interface driver usb-storage
Sep 13 15:00:22 escabot USB Mass Storage support registered.
Sep 13 15:00:22 escabot usbcore: registered new interface driver usbhid
Sep 13 15:00:22 escabot drivers/usb/input/hid-core.c: v2.6:USB HID  
core driver
Sep 13 15:00:22 escabot PNP: PS/2 Controller  
[PNP0303:KBD,PNP0f0e:PS2M] at 0x60,0x64 irq 1,12
Sep 13 15:00:22 escabot serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 15:00:22 escabot serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 15:00:22 escabot mice: PS/2 mouse device common for all mice
Sep 13 15:00:22 escabot input: AT Translated Set 2 keyboard as /class/input/input1
Sep 13 15:00:22 escabot oprofile: using NMI interrupt.
Sep 13 15:00:22 escabot TCP cubic registered
Sep 13 15:00:22 escabot NET: Registered protocol family 1
Sep 13 15:00:22 escabot NET: Registered protocol family 17
Sep 13 15:00:22 escabot Starting balanced_irq
Sep 13 15:00:22 escabot Using IPI Shortcut mode
Sep 13 15:00:22 escabot input: ImExPS/2 Logitech Explorer Mouse as /class/input/input2
Sep 13 15:00:22 escabot RAMDISK: Compressed image found at block 0
Sep 13 15:00:22 escabot VFS: Mounted root (ext2 filesystem).
Sep 13 15:00:22 escabot QLogic Fibre Channel HBA Driver
Sep 13 15:00:22 escabot ACPI: PCI Interrupt 0000:05:08.0[A] -> GSI 32  
(level, low) -> IRQ 21
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: Found an ISP2312, irq  
21, iobase 0xf8834000
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: Configuring PCI space...
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: Configure NVRAM  
parameters...
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: Verifying loaded RISC  
code...
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: Allocated (412 KB) for  
firmware dump...
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: Waiting for LIP to  
complete...
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: LOOP UP detected (2 Gbps).
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: Topology - (F_Port),  
Host Loop address 0xffff
Sep 13 15:00:22 escabot scsi0 : qla2xxx
Sep 13 15:00:22 escabot qla2300 0000:05:08.0:
Sep 13 15:00:22 escabot QLogic Fibre Channel HBA Driver: 8.01.07-fo
Sep 13 15:00:22 escabot QLogic QLA2340 -
Sep 13 15:00:22 escabot ISP2312: PCI-X (100 MHz) @ 0000:05:08.0 hdma+, host#=0, fw=3.03.19 IPX
Sep 13 15:00:22 escabot scsi 0:0:0:0: Direct-Access     SUN       
CSM200_R         0619 PQ: 0 ANSI: 5
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: scsi(0:0:0:0): Enabled  
tagged queuing, queue depth 32.
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] 104857600 512-byte hardware  
sectors (53687 MB)
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] Write Protect is off
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] Mode Sense: 77 00 10 08
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] Write cache: enabled, read  
cache: enabled, supports DPO and FUA
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] 104857600 512-byte hardware  
sectors (53687 MB)
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] Write Protect is off
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] Mode Sense: 77 00 10 08
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] Write cache: enabled, read  
cache: enabled, supports DPO and FUA
Sep 13 15:00:22 escabot sda: sda1 sda2 sda3
Sep 13 15:00:22 escabot sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 15:00:22 escabot sd 0:0:0:0: Attached scsi generic sg0 type 0
Sep 13 15:00:22 escabot scsi 0:0:0:1: Direct-Access     SUN       
CSM200_R         0619 PQ: 0 ANSI: 5
Sep 13 15:00:22 escabot qla2300 0000:05:08.0: scsi(0:0:0:1): Enabled  
tagged queuing, queue depth 32.
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] 104857600 512-byte hardware  
sectors (53687 MB)
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] Write Protect is off
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] Mode Sense: 77 00 10 08
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] Write cache: enabled, read  
cache: enabled, supports DPO and FUA
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] 104857600 512-byte hardware  
sectors (53687 MB)
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] Write Protect is off
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] Mode Sense: 77 00 10 08
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] Write cache: enabled, read  
cache: enabled, supports DPO and FUA
Sep 13 15:00:22 escabot sdb: sdb1
Sep 13 15:00:22 escabot sd 0:0:0:1: [sdb] Attached SCSI disk
Sep 13 15:00:22 escabot sd 0:0:0:1: Attached scsi generic sg1 type 0
Sep 13 15:00:22 escabot ACPI: PCI Interrupt 0000:06:09.0[A] -> GSI 36  
(level, low) -> IRQ 22
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: Found an ISP2312, irq  
22, iobase 0xf8836000
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: Configuring PCI space...
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: Configure NVRAM  
parameters...
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: Verifying loaded RISC  
code...
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: Allocated (412 KB) for  
firmware dump...
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: Waiting for LIP to  
complete...
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: LIP reset occured (f8f7).
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: LIP occured (f8f7).
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: LIP reset occured (f7f7).
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: LOOP UP detected (2 Gbps).
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: Topology - (F_Port),  
Host Loop address 0xffff
Sep 13 15:00:22 escabot scsi1 : qla2xxx
Sep 13 15:00:22 escabot qla2300 0000:06:09.0:
Sep 13 15:00:22 escabot QLogic Fibre Channel HBA Driver: 8.01.07-fo
Sep 13 15:00:22 escabot QLogic QLA2340 -
Sep 13 15:00:22 escabot ISP2312: PCI-X (133 MHz) @ 0000:06:09.0 hdma+, host#=1, fw=3.03.19 IPX
Sep 13 15:00:22 escabot scsi 1:0:0:0: Sequential-Access IBM       
ULTRIUM-TD3      54K1 PQ: 0 ANSI: 3
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: scsi(1:0:0:0): Enabled  
tagged queuing, queue depth 32.
Sep 13 15:00:22 escabot st 1:0:0:0: Attached scsi tape st0
Sep 13 15:00:22 escabot st 1:0:0:0: st0: try direct i/o: yes  
(alignment 512 B)
Sep 13 15:00:22 escabot st 1:0:0:0: Attached scsi generic sg2 type 1
Sep 13 15:00:22 escabot scsi 1:0:0:1: Medium Changer    SPECTRA   
PYTHON           2000 PQ: 0 ANSI: 3
Sep 13 15:00:22 escabot ch0: type #1 (mt): 0x1+1 [medium transport]
Sep 13 15:00:22 escabot ch0: type #2 (st): 0x1000+30 [storage]
Sep 13 15:00:22 escabot ch0: type #3 (ie): 0x10+0 [import/export]
Sep 13 15:00:22 escabot ch0: type #4 (dt): 0x100+2 [data transfer]
Sep 13 15:00:22 escabot ch0: dt 0x100: ID/LUN unknown
Sep 13 15:00:22 escabot ch0: dt 0x101: ID/LUN unknown
Sep 13 15:00:22 escabot ch0: INITIALIZE ELEMENT STATUS, may take some  
time ...
Sep 13 15:00:22 escabot ch0: ... finished
Sep 13 15:00:22 escabot ch 1:0:0:1: Attached scsi changer ch0
Sep 13 15:00:22 escabot ch 1:0:0:1: Attached scsi generic sg3 type 8
Sep 13 15:00:22 escabot scsi 1:0:1:0: Sequential-Access IBM       
ULTRIUM-TD3      54K1 PQ: 0 ANSI: 3
Sep 13 15:00:22 escabot qla2300 0000:06:09.0: scsi(1:0:1:0): Enabled  
tagged queuing, queue depth 32.
Sep 13 15:00:22 escabot st 1:0:1:0: Attached scsi tape st1
Sep 13 15:00:22 escabot st 1:0:1:0: st1: try direct i/o: yes  
(alignment 512 B)
Sep 13 15:00:22 escabot st 1:0:1:0: Attached scsi generic sg4 type 1
Sep 13 15:00:22 escabot kjournald starting.  Commit interval 5 seconds
Sep 13 15:00:22 escabot EXT3-fs: mounted filesystem with ordered data  
mode.
Sep 13 15:00:22 escabot Freeing unused kernel memory: 240k freed
Sep 13 15:00:22 escabot EXT3 FS on sda3, internal journal
Sep 13 15:00:22 escabot kjournald starting.  Commit interval 5 seconds
Sep 13 15:00:22 escabot EXT3 FS on sdb1, internal journal
Sep 13 15:00:22 escabot EXT3-fs: mounted filesystem with ordered data  
mode.
Sep 13 15:00:22 escabot Adding 2008116k swap on /dev/sda2.   
Priority:-1 extents:1 across:2008116k
Sep 13 15:00:22 escabot PM: Writing back config space on device  
0000:03:06.0 at offset b (was 164814e4, writing d00e11)
Sep 13 15:00:22 escabot PM: Writing back config space on device  
0000:03:06.0 at offset 3 (was 804000, writing 804010)
Sep 13 15:00:22 escabot PM: Writing back config space on device  
0000:03:06.0 at offset 2 (was 2000000, writing 2000010)
Sep 13 15:00:22 escabot PM: Writing back config space on device  
0000:03:06.0 at offset 1 (was 2b00000, writing 2b00146)
Sep 13 15:00:22 escabot tg3: eth2: Link is up at 1000 Mbps, full duplex.
Sep 13 15:00:22 escabot tg3: eth2: Flow control is on for TX and on  
for RX.
Sep 13 15:00:22 escabot PM: Writing back config space on device  
0000:03:06.1 at offset b (was 164814e4, writing d00e11)
Sep 13 15:00:22 escabot PM: Writing back config space on device  
0000:03:06.1 at offset 3 (was 804000, writing 804010)
Sep 13 15:00:22 escabot PM: Writing back config space on device  
0000:03:06.1 at offset 2 (was 2000000, writing 2000010)
Sep 13 15:00:22 escabot PM: Writing back config space on device  
0000:03:06.1 at offset 1 (was 2b00000, writing 2b00146)
Sep 13 15:00:25 escabot st0: Block limits 1 - 16777215 bytes.
Sep 13 15:00:25 escabot st1: Block limits 1 - 16777215 bytes.
Sep 13 15:00:25 escabot tg3: eth3: Link is up at 1000 Mbps, full duplex.
Sep 13 15:00:25 escabot tg3: eth3: Flow control is on for TX and on  
for RX.
Sep 13 15:04:16 escabot scsi 0:0:0:2: Direct-Access     SUN       
CSM200_R         0619 PQ: 0 ANSI: 5
Sep 13 15:04:16 escabot qla2300 0000:05:08.0: scsi(0:0:0:2): Enabled  
tagged queuing, queue depth 32.
Sep 13 15:04:16 escabot sd 0:0:0:2: [sdc] 1048576000 512-byte  
hardware sectors (536871 MB)
Sep 13 15:04:16 escabot sd 0:0:0:2: [sdc] Write Protect is off
Sep 13 15:04:16 escabot sd 0:0:0:2: [sdc] Mode Sense: 77 00 10 08
Sep 13 15:04:16 escabot sd 0:0:0:2: [sdc] Write cache: enabled, read  
cache: enabled, supports DPO and FUA
Sep 13 15:04:16 escabot sd 0:0:0:2: [sdc] 1048576000 512-byte  
hardware sectors (536871 MB)
Sep 13 15:04:16 escabot sd 0:0:0:2: [sdc] Write Protect is off
Sep 13 15:04:16 escabot sd 0:0:0:2: [sdc] Mode Sense: 77 00 10 08
Sep 13 15:04:16 escabot sd 0:0:0:2: [sdc] Write cache: enabled, read  
cache: enabled, supports DPO and FUA
Sep 13 15:04:19 escabot sdc: sdc1
Sep 13 15:04:19 escabot sd 0:0:0:2: [sdc] Attached SCSI disk
Sep 13 15:04:19 escabot sd 0:0:0:2: Attached scsi generic sg5 type 0
Sep 13 15:04:24 escabot sd 0:0:0:2: [sdc] 1048576000 512-byte  
hardware sectors (536871 MB)
Sep 13 15:04:24 escabot sd 0:0:0:2: [sdc] Write Protect is off
Sep 13 15:04:24 escabot sd 0:0:0:2: [sdc] Mode Sense: 77 00 10 08
Sep 13 15:04:24 escabot sd 0:0:0:2: [sdc] Write cache: enabled, read  
cache: enabled, supports DPO and FUA
Sep 13 15:04:24 escabot sdc: sdc1
Sep 13 15:04:29 escabot NTFS volume version 3.1.
Sep 13 15:31:25 escabot init invoked oom-killer: gfp_mask=0xd0,  
order=0, oomkilladj=0
Sep 13 15:31:25 escabot [<c0149ea8>] out_of_memory+0x158/0x1b0
Sep 13 15:31:25 escabot [<c014b7b0>] __alloc_pages+0x2a0/0x300
Sep 13 15:31:25 escabot [<c01729aa>] core_sys_select+0x1fa/0x2f0
Sep 13 15:31:25 escabot [<c0162a21>] cache_alloc_refill+0x2e1/0x500
Sep 13 15:31:25 escabot [<c0162736>] kmem_cache_alloc+0x46/0x50
Sep 13 15:31:25 escabot [<c016e238>] getname+0x28/0xf0
Sep 13 15:31:25 escabot [<c016fe5e>] __user_walk_fd+0x1e/0x60
Sep 13 15:31:25 escabot [<c0168e29>] cp_new_stat64+0xf9/0x110
Sep 13 15:31:25 escabot [<c0169152>] vfs_stat_fd+0x22/0x60
Sep 13 15:31:25 escabot [<c016922f>] sys_stat64+0xf/0x30
Sep 13 15:31:25 escabot [<c0124956>] do_gettimeofday+0x36/0xf0
Sep 13 15:31:25 escabot [<c0120bbf>] sys_time+0xf/0x30
Sep 13 15:31:25 escabot [<c0102b10>] sysenter_past_esp+0x5d/0x81
Sep 13 15:31:25 escabot =======================
Sep 13 15:31:25 escabot Mem-info:
Sep 13 15:31:25 escabot DMA per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    1: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    2: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    3: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot Normal per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd: 143    
Cold: hi:   62, btch:  15 usd:  56
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  31    
Cold: hi:   62, btch:  15 usd:  53
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:   8    
Cold: hi:   62, btch:  15 usd:  54
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd:  99    
Cold: hi:   62, btch:  15 usd:  58
Sep 13 15:31:25 escabot HighMem per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd:   1    
Cold: hi:   62, btch:  15 usd:  10
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  23    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:  18    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd:   9    
Cold: hi:   62, btch:  15 usd:  12
Sep 13 15:31:25 escabot Active:8078 inactive:1776744 dirty:10338  
writeback:0 unstable:0
Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988  
pagetables:129 bounce:0
Sep 13 15:31:25 escabot DMA free:3560kB min:68kB low:84kB high:100kB  
active:0kB inactive:0kB present:16256kB pages_scanned:0  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 873 12176
Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB high: 
5616kB active:0kB inactive:3160kB present:894080kB pages_scanned:5336  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 90424
Sep 13 15:31:25 escabot HighMem free:4354372kB min:512kB low:12640kB  
high:24768kB active:32312kB inactive:7103816kB present:11574272kB  
pages_scanned:0 all_unreclaimable? no
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 0
Sep 13 15:31:25 escabot DMA: 4*4kB 3*8kB 0*16kB 0*32kB 1*64kB 1*128kB  
1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3560kB
Sep 13 15:31:25 escabot Normal: 44*4kB 18*8kB 6*16kB 1*32kB 6*64kB  
2*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3648kB
Sep 13 15:31:25 escabot HighMem: 1*4kB 0*8kB 0*16kB 37914*32kB  
43188*64kB 1628*128kB 5*256kB 247*512kB 30*1024kB 1*2048kB 2*4096kB =  
4354372kB
Sep 13 15:31:25 escabot Swap cache: add 983, delete 972, find 0/1,  
race 0+0
Sep 13 15:31:25 escabot Free swap  = 2004216kB
Sep 13 15:31:25 escabot Total swap = 2008116kB
Sep 13 15:31:25 escabot Free swap:       2004216kB
Sep 13 15:31:25 escabot syslog-ng invoked oom-killer: gfp_mask=0xd0,  
order=0, oomkilladj=0
Sep 13 15:31:25 escabot [<c0149ea8>] out_of_memory+0x158/0x1b0
Sep 13 15:31:25 escabot [<c014b7b0>] __alloc_pages+0x2a0/0x300
Sep 13 15:31:25 escabot [<c0162a21>] cache_alloc_refill+0x2e1/0x500
Sep 13 15:31:25 escabot [<c0162736>] kmem_cache_alloc+0x46/0x50
Sep 13 15:31:25 escabot [<c016e238>] getname+0x28/0xf0
Sep 13 15:31:25 escabot [<c016fe5e>] __user_walk_fd+0x1e/0x60
Sep 13 15:31:25 escabot [<c0169152>] vfs_stat_fd+0x22/0x60
Sep 13 15:31:25 escabot [<c012f460>] autoremove_wake_function+0x0/0x50
Sep 13 15:31:25 escabot [<c0123928>] __capable+0x8/0x20
Sep 13 15:31:25 escabot [<c02d02ef>] cap_syslog+0x1f/0x30
Sep 13 15:31:25 escabot [<c0123928>] __capable+0x8/0x20
Sep 13 15:31:25 escabot [<c011d026>] do_syslog+0x3b6/0x3e0
Sep 13 15:31:25 escabot [<c016922f>] sys_stat64+0xf/0x30
Sep 13 15:31:25 escabot [<c0124956>] do_gettimeofday+0x36/0xf0
Sep 13 15:31:25 escabot [<c0120bbf>] sys_time+0xf/0x30
Sep 13 15:31:25 escabot [<c0102b10>] sysenter_past_esp+0x5d/0x81
Sep 13 15:31:25 escabot =======================
Sep 13 15:31:25 escabot Mem-info:
Sep 13 15:31:25 escabot DMA per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    1: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    2: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    3: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot Normal per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd: 143    
Cold: hi:   62, btch:  15 usd:  56
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  31    
Cold: hi:   62, btch:  15 usd:  53
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:   8    
Cold: hi:   62, btch:  15 usd:  54
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd:  99    
Cold: hi:   62, btch:  15 usd:  58
Sep 13 15:31:25 escabot HighMem per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd:   1    
Cold: hi:   62, btch:  15 usd:  10
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  23    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:  18    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd:   9    
Cold: hi:   62, btch:  15 usd:  12
Sep 13 15:31:25 escabot Active:8078 inactive:1776744 dirty:10338  
writeback:0 unstable:0
Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988  
pagetables:129 bounce:0
Sep 13 15:31:25 escabot DMA free:3560kB min:68kB low:84kB high:100kB  
active:0kB inactive:0kB present:16256kB pages_scanned:0  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 873 12176
Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB high: 
5616kB active:0kB inactive:3160kB present:894080kB pages_scanned:5336  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 90424
Sep 13 15:31:25 escabot HighMem free:4354372kB min:512kB low:12640kB  
high:24768kB active:32312kB inactive:7103816kB present:11574272kB  
pages_scanned:0 all_unreclaimable? no
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 0
Sep 13 15:31:25 escabot DMA: 4*4kB 3*8kB 0*16kB 0*32kB 1*64kB 1*128kB  
1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3560kB
Sep 13 15:31:25 escabot Normal: 44*4kB 18*8kB 6*16kB 1*32kB 6*64kB  
2*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3648kB
Sep 13 15:31:25 escabot HighMem: 1*4kB 0*8kB 0*16kB 37914*32kB  
43188*64kB 1628*128kB 5*256kB 247*512kB 30*1024kB 1*2048kB 2*4096kB =  
4354372kB
Sep 13 15:31:25 escabot Swap cache: add 983, delete 972, find 0/1,  
race 0+0
Sep 13 15:31:25 escabot Free swap  = 2004216kB
Sep 13 15:31:25 escabot Total swap = 2008116kB
Sep 13 15:31:25 escabot Free swap:       2004216kB
Sep 13 15:31:25 escabot 3145727 pages of RAM
Sep 13 15:31:25 escabot 2916351 pages of HIGHMEM
Sep 13 15:31:25 escabot 69112 reserved pages
Sep 13 15:31:25 escabot 1781351 pages shared
Sep 13 15:31:25 escabot 11 pages swap cached
Sep 13 15:31:25 escabot 10338 pages dirty
Sep 13 15:31:25 escabot 0 pages writeback
Sep 13 15:31:25 escabot 988 pages mapped
Sep 13 15:31:25 escabot 198893 pages slab
Sep 13 15:31:25 escabot 129 pages pagetables
Sep 13 15:31:25 escabot Out of memory: kill process 5771 (mysqld)  
score 35692 or a child
Sep 13 15:31:25 escabot Killed process 5771 (mysqld)
Sep 13 15:31:25 escabot init invoked oom-killer: gfp_mask=0xd0,  
order=0, oomkilladj=0
Sep 13 15:31:25 escabot [<c0149ea8>] out_of_memory+0x158/0x1b0
Sep 13 15:31:25 escabot [<c014b7b0>] __alloc_pages+0x2a0/0x300
Sep 13 15:31:25 escabot [<c01729aa>] core_sys_select+0x1fa/0x2f0
Sep 13 15:31:25 escabot [<c0162a21>] cache_alloc_refill+0x2e1/0x500
Sep 13 15:31:25 escabot [<c0162736>] kmem_cache_alloc+0x46/0x50
Sep 13 15:31:25 escabot [<c016e238>] getname+0x28/0xf0
Sep 13 15:31:25 escabot [<c016fe5e>] __user_walk_fd+0x1e/0x60
Sep 13 15:31:25 escabot [<c0168e29>] cp_new_stat64+0xf9/0x110
Sep 13 15:31:25 escabot [<c0169152>] vfs_stat_fd+0x22/0x60
Sep 13 15:31:25 escabot [<c016922f>] sys_stat64+0xf/0x30
Sep 13 15:31:25 escabot [<c0124956>] do_gettimeofday+0x36/0xf0
Sep 13 15:31:25 escabot [<c0120bbf>] sys_time+0xf/0x30
Sep 13 15:31:25 escabot [<c0102b10>] sysenter_past_esp+0x5d/0x81
Sep 13 15:31:25 escabot =======================
Sep 13 15:31:25 escabot Mem-info:
Sep 13 15:31:25 escabot DMA per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    1: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    2: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    3: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot Normal per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd: 143    
Cold: hi:   62, btch:  15 usd:  56
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  31    
Cold: hi:   62, btch:  15 usd:  53
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:   8    
Cold: hi:   62, btch:  15 usd:  54
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd:  99    
Cold: hi:   62, btch:  15 usd:  58
Sep 13 15:31:25 escabot HighMem per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd:   1    
Cold: hi:   62, btch:  15 usd:  10
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  23    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:  18    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd:   9    
Cold: hi:   62, btch:  15 usd:  12
Sep 13 15:31:25 escabot Active:8078 inactive:1776744 dirty:10338  
writeback:0 unstable:0
Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988  
pagetables:129 bounce:0
Sep 13 15:31:25 escabot DMA free:3560kB min:68kB low:84kB high:100kB  
active:0kB inactive:0kB present:16256kB pages_scanned:0  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 873 12176
Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB high: 
5616kB active:0kB inactive:3160kB present:894080kB pages_scanned:5336  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 90424
Sep 13 15:31:25 escabot HighMem free:4354372kB min:512kB low:12640kB  
high:24768kB active:32312kB inactive:7103816kB present:11574272kB  
pages_scanned:0 all_unreclaimable? no
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 0
Sep 13 15:31:25 escabot DMA: 4*4kB 3*8kB 0*16kB 0*32kB 1*64kB 1*128kB  
1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3560kB
Sep 13 15:31:25 escabot Normal: 44*4kB 18*8kB 6*16kB 1*32kB 6*64kB  
2*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3648kB
Sep 13 15:31:25 escabot HighMem: 1*4kB 0*8kB 0*16kB 37914*32kB  
43188*64kB 1628*128kB 5*256kB 247*512kB 30*1024kB 1*2048kB 2*4096kB =  
4354372kB
Sep 13 15:31:25 escabot Swap cache: add 983, delete 972, find 0/1,  
race 0+0
Sep 13 15:31:25 escabot Free swap  = 2004216kB
Sep 13 15:31:25 escabot Total swap = 2008116kB
Sep 13 15:31:25 escabot Free swap:       2004216kB
Sep 13 15:31:25 escabot 3145727 pages of RAM
Sep 13 15:31:25 escabot 2916351 pages of HIGHMEM
Sep 13 15:31:25 escabot 69112 reserved pages
Sep 13 15:31:25 escabot 1781351 pages shared
Sep 13 15:31:25 escabot 11 pages swap cached
Sep 13 15:31:25 escabot 10338 pages dirty
Sep 13 15:31:25 escabot 0 pages writeback
Sep 13 15:31:25 escabot 988 pages mapped
Sep 13 15:31:25 escabot 198893 pages slab
Sep 13 15:31:25 escabot 129 pages pagetables
Sep 13 15:31:25 escabot Out of memory: kill process 5789 (mysqld)  
score 35692 or a child
Sep 13 15:31:25 escabot Killed process 5789 (mysqld)
Sep 13 15:31:25 escabot syslog-ng invoked oom-killer: gfp_mask=0xd0,  
order=0, oomkilladj=0
Sep 13 15:31:25 escabot [<c0149ea8>] out_of_memory+0x158/0x1b0
Sep 13 15:31:25 escabot [<c014b7b0>] __alloc_pages+0x2a0/0x300
Sep 13 15:31:25 escabot [<c0162a21>] cache_alloc_refill+0x2e1/0x500
Sep 13 15:31:25 escabot [<c0162736>] kmem_cache_alloc+0x46/0x50
Sep 13 15:31:25 escabot [<c016e238>] getname+0x28/0xf0
Sep 13 15:31:25 escabot [<c016fe5e>] __user_walk_fd+0x1e/0x60
Sep 13 15:31:25 escabot [<c0169152>] vfs_stat_fd+0x22/0x60
Sep 13 15:31:25 escabot [<c012f460>] autoremove_wake_function+0x0/0x50
Sep 13 15:31:25 escabot [<c0123928>] __capable+0x8/0x20
Sep 13 15:31:25 escabot [<c02d02ef>] cap_syslog+0x1f/0x30
Sep 13 15:31:25 escabot [<c0123928>] __capable+0x8/0x20
Sep 13 15:31:25 escabot [<c011d026>] do_syslog+0x3b6/0x3e0
Sep 13 15:31:25 escabot [<c016922f>] sys_stat64+0xf/0x30
Sep 13 15:31:25 escabot [<c0124956>] do_gettimeofday+0x36/0xf0
Sep 13 15:31:25 escabot [<c0120bbf>] sys_time+0xf/0x30
Sep 13 15:31:25 escabot [<c0102b10>] sysenter_past_esp+0x5d/0x81
Sep 13 15:31:25 escabot =======================
Sep 13 15:31:25 escabot Mem-info:
Sep 13 15:31:25 escabot DMA per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    1: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    2: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    3: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot Normal per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd: 143    
Cold: hi:   62, btch:  15 usd:  56
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  31    
Cold: hi:   62, btch:  15 usd:  53
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:   8    
Cold: hi:   62, btch:  15 usd:  54
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd: 130    
Cold: hi:   62, btch:  15 usd:  58
Sep 13 15:31:25 escabot HighMem per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd:   1    
Cold: hi:   62, btch:  15 usd:  10
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  23    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:  18    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd: 162    
Cold: hi:   62, btch:  15 usd:  12
Sep 13 15:31:25 escabot Active:2311 inactive:1776744 dirty:10338  
writeback:0 unstable:0
Sep 13 15:31:25 escabot free:1096030 slab:198893 mapped:915  
pagetables:92 bounce:0
Sep 13 15:31:25 escabot DMA free:3560kB min:68kB low:84kB high:100kB  
active:0kB inactive:0kB present:16256kB pages_scanned:0  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 873 12176
Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB high: 
5616kB active:0kB inactive:3160kB present:894080kB pages_scanned:5336  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 90424
Sep 13 15:31:25 escabot HighMem free:4376912kB min:512kB low:12640kB  
high:24768kB active:9244kB inactive:7103816kB present:11574272kB  
pages_scanned:0 all_unreclaimable? no
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 0
Sep 13 15:31:25 escabot DMA: 4*4kB 3*8kB 0*16kB 0*32kB 1*64kB 1*128kB  
1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3560kB
Sep 13 15:31:25 escabot Normal: 44*4kB 18*8kB 6*16kB 1*32kB 6*64kB  
2*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3648kB
Sep 13 15:31:25 escabot HighMem: 107*4kB 42*8kB 39*16kB 37944*32kB  
43216*64kB 1640*128kB 15*256kB 253*512kB 35*1024kB 2*2048kB 3*4096kB  
= 4376940kB
Sep 13 15:31:25 escabot Swap cache: add 983, delete 972, find 0/1,  
race 0+0
Sep 13 15:31:25 escabot Free swap  = 2004216kB
Sep 13 15:31:25 escabot Total swap = 2008116kB
Sep 13 15:31:25 escabot Free swap:       2004216kB
Sep 13 15:31:25 escabot 3145727 pages of RAM
Sep 13 15:31:25 escabot 2916351 pages of HIGHMEM
Sep 13 15:31:25 escabot 69112 reserved pages
Sep 13 15:31:25 escabot 1780992 pages shared
Sep 13 15:31:25 escabot 11 pages swap cached
Sep 13 15:31:25 escabot 10338 pages dirty
Sep 13 15:31:25 escabot 0 pages writeback
Sep 13 15:31:25 escabot 915 pages mapped
Sep 13 15:31:25 escabot 198893 pages slab
Sep 13 15:31:25 escabot 92 pages pagetables
Sep 13 15:31:25 escabot Out of memory: kill process 5857 (bacula-dir)  
score 379 or a child
Sep 13 15:31:25 escabot Killed process 5857 (bacula-dir)
Sep 13 15:31:25 escabot init invoked oom-killer: gfp_mask=0xd0,  
order=0, oomkilladj=0
Sep 13 15:31:25 escabot [<c0149ea8>] out_of_memory+0x158/0x1b0
Sep 13 15:31:25 escabot [<c014b7b0>] __alloc_pages+0x2a0/0x300
Sep 13 15:31:25 escabot [<c01729aa>] core_sys_select+0x1fa/0x2f0
Sep 13 15:31:25 escabot [<c0162a21>] cache_alloc_refill+0x2e1/0x500
Sep 13 15:31:25 escabot [<c0162736>] kmem_cache_alloc+0x46/0x50
Sep 13 15:31:25 escabot [<c016e238>] getname+0x28/0xf0
Sep 13 15:31:25 escabot [<c016fe5e>] __user_walk_fd+0x1e/0x60
Sep 13 15:31:25 escabot [<c0168e29>] cp_new_stat64+0xf9/0x110
Sep 13 15:31:25 escabot [<c0169152>] vfs_stat_fd+0x22/0x60
Sep 13 15:31:25 escabot [<c016922f>] sys_stat64+0xf/0x30
Sep 13 15:31:25 escabot [<c0124956>] do_gettimeofday+0x36/0xf0
Sep 13 15:31:25 escabot [<c0120bbf>] sys_time+0xf/0x30
Sep 13 15:31:25 escabot [<c0102b10>] sysenter_past_esp+0x5d/0x81
Sep 13 15:31:25 escabot =======================
Sep 13 15:31:25 escabot Mem-info:
Sep 13 15:31:25 escabot DMA per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    1: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    2: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot CPU    3: Hot: hi:    0, btch:   1 usd:   0    
Cold: hi:    0, btch:   1 usd:   0
Sep 13 15:31:25 escabot Normal per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd: 151    
Cold: hi:   62, btch:  15 usd:  56
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  31    
Cold: hi:   62, btch:  15 usd:  53
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:   8    
Cold: hi:   62, btch:  15 usd:  54
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd: 130    
Cold: hi:   62, btch:  15 usd:  58
Sep 13 15:31:25 escabot HighMem per-cpu:
Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd:  63    
Cold: hi:   62, btch:  15 usd:  10
Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  23    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:  18    
Cold: hi:   62, btch:  15 usd:   1
Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd: 162    
Cold: hi:   62, btch:  15 usd:  12
Sep 13 15:31:25 escabot Active:2238 inactive:1776744 dirty:10338  
writeback:0 unstable:0
Sep 13 15:31:25 escabot free:1096030 slab:198893 mapped:842  
pagetables:92 bounce:0
Sep 13 15:31:25 escabot DMA free:3560kB min:68kB low:84kB high:100kB  
active:0kB inactive:0kB present:16256kB pages_scanned:0  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 873 12176
Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB high: 
5616kB active:0kB inactive:3160kB present:894080kB pages_scanned:5336  
all_unreclaimable? yes
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 90424
Sep 13 15:31:25 escabot HighMem free:4376912kB min:512kB low:12640kB  
high:24768kB active:8952kB inactive:7103816kB present:11574272kB  
pages_scanned:0 all_unreclaimable? no
Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 0
Sep 13 15:31:25 escabot DMA: 4*4kB 3*8kB 0*16kB 0*32kB 1*64kB 1*128kB  
1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3560kB
Sep 13 15:31:25 escabot Normal: 44*4kB 18*8kB 6*16kB 1*32kB 6*64kB  
2*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3648kB
Sep 13 15:31:25 escabot HighMem: 107*4kB 42*8kB 39*16kB 37944*32kB  
43216*64kB 1640*128kB 15*256kB 253*512kB 35*1024kB 2*2048kB 3*4096kB  
= 4376940kB
Sep 13 15:31:25 escabot Swap cache: add 983, delete 972, find 0/1,  
race 0+0
Sep 13 15:31:25 escabot Free swap  = 2004644kB
Sep 13 15:31:25 escabot Total swap = 2008116kB
Sep 13 15:31:25 escabot Free swap:       2004644kB
Sep 13 15:31:25 escabot 3145727 pages of RAM
Sep 13 15:31:25 escabot 2916351 pages of HIGHMEM
Sep 13 15:31:25 escabot 69112 reserved pages
Sep 13 15:31:25 escabot 1780742 pages shared
Sep 13 15:31:25 escabot 11 pages swap cached
Sep 13 15:31:25 escabot 10338 pages dirty
Sep 13 15:31:25 escabot 0 pages writeback
Sep 13 15:31:25 escabot 842 pages mapped
Sep 13 15:31:25 escabot 198893 pages slab
Sep 13 15:31:25 escabot 92 pages pagetables
Sep 13 15:31:25 escabot Out of memory: kill process 5868 (bacula-sd)  
score 350 or a child
Sep 13 15:31:25 escabot Killed process 5868 (bacula-sd)
Sep 13 15:31:25 escabot 3145727 pages of RAM
Sep 13 15:31:25 escabot 2916351 pages of HIGHMEM
Sep 13 15:31:25 escabot 69112 reserved pages
Sep 13 15:31:25 escabot 1780407 pages shared
Sep 13 15:31:25 escabot 11 pages swap cached
Sep 13 15:31:25 escabot 10338 pages dirty
Sep 13 15:31:25 escabot 0 pages writeback
Sep 13 15:31:25 escabot 842 pages mapped
Sep 13 15:31:25 escabot 198893 pages slab
Sep 13 15:31:25 escabot 92 pages pagetables
Sep 13 15:31:25 escabot Out of memory: kill process 6032 (sshd) score  
223 or a child
Sep 13 15:31:25 escabot Killed process 6180 (sshd)
</quote>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-15  7:27 VM/VFS bug with large amount of memory and file systems? Anton Altaparmakov
@ 2007-09-15 10:08 ` Peter Zijlstra
  2007-09-15 10:50   ` Anton Altaparmakov
  2007-09-15 10:52   ` Andrew Morton
  2007-10-03 10:51 ` Pavel Machek
  1 sibling, 2 replies; 24+ messages in thread
From: Peter Zijlstra @ 2007-09-15 10:08 UTC (permalink / raw)
  To: Anton Altaparmakov
  Cc: Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

On Sat, 2007-09-15 at 08:27 +0100, Anton Altaparmakov wrote:

Please don't word-wrap log files, they're hard enough to read without
it :-(

(I see people do this more and more often. *WHY*? Is it because we
like 80-char lines, in code and email?)


Anyway, looks like all of zone_normal is pinned in kernel allocations:

> Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB high: 5616kB active:0kB inactive:3160kB present:894080kB pages_scanned:5336 all_unreclaimable? yes

Out of the 870-odd MB, only about 3 MB is on the LRU.

Would be grand if you could have a look at slabinfo and the like.
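Something along these lines would rank the caches by approximate
footprint (a sketch that assumes the slabinfo v2.1 column layout, where
field 6 is pagesperslab and field 15 is num_slabs; the here-doc is
made-up sample data — on the real box you would feed it /proc/slabinfo
directly):

```shell
# Approximate per-cache footprint: num_slabs * pagesperslab * 4 KiB per
# slab page, largest first.  Skips the two slabinfo header lines.
rank_slabs() {
    tail -n +3 "$1" | awk '{ printf "%d kB %s\n", $15 * $6 * 4, $1 }' | sort -rn
}

# Sample data in slabinfo v2.1 format (made-up numbers for illustration).
cat > /tmp/slabinfo.sample <<'EOF'
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
buffer_head 1500000 1500000 52 71 1 : tunables 120 60 8 : slabdata 21127 21127 0
dentry 20000 20000 132 29 1 : tunables 120 60 8 : slabdata 690 690 0
EOF
rank_slabs /tmp/slabinfo.sample
```

On the affected machine, `rank_slabs /proc/slabinfo | head -20` should
show at a glance which cache is pinning the ~870 MB of ZONE_NORMAL.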

Peter

---

> Sep 13 15:31:25 escabot init invoked oom-killer: gfp_mask=0xd0,  
> order=0, oomkilladj=0
> Sep 13 15:31:25 escabot [<c0149ea8>] out_of_memory+0x158/0x1b0
> Sep 13 15:31:25 escabot [<c014b7b0>] __alloc_pages+0x2a0/0x300
> Sep 13 15:31:25 escabot [<c01729aa>] core_sys_select+0x1fa/0x2f0
> Sep 13 15:31:25 escabot [<c0162a21>] cache_alloc_refill+0x2e1/0x500
> Sep 13 15:31:25 escabot [<c0162736>] kmem_cache_alloc+0x46/0x50
> Sep 13 15:31:25 escabot [<c016e238>] getname+0x28/0xf0
> Sep 13 15:31:25 escabot [<c016fe5e>] __user_walk_fd+0x1e/0x60
> Sep 13 15:31:25 escabot [<c0168e29>] cp_new_stat64+0xf9/0x110
> Sep 13 15:31:25 escabot [<c0169152>] vfs_stat_fd+0x22/0x60
> Sep 13 15:31:25 escabot [<c016922f>] sys_stat64+0xf/0x30
> Sep 13 15:31:25 escabot [<c0124956>] do_gettimeofday+0x36/0xf0
> Sep 13 15:31:25 escabot [<c0120bbf>] sys_time+0xf/0x30
> Sep 13 15:31:25 escabot [<c0102b10>] sysenter_past_esp+0x5d/0x81
> Sep 13 15:31:25 escabot =======================
> Sep 13 15:31:25 escabot Mem-info:
> Sep 13 15:31:25 escabot DMA per-cpu:
> Sep 13 15:31:25 escabot CPU    0: Hot: hi:    0, btch:   1 usd:   0    
> Cold: hi:    0, btch:   1 usd:   0
> Sep 13 15:31:25 escabot CPU    1: Hot: hi:    0, btch:   1 usd:   0    
> Cold: hi:    0, btch:   1 usd:   0
> Sep 13 15:31:25 escabot CPU    2: Hot: hi:    0, btch:   1 usd:   0    
> Cold: hi:    0, btch:   1 usd:   0
> Sep 13 15:31:25 escabot CPU    3: Hot: hi:    0, btch:   1 usd:   0    
> Cold: hi:    0, btch:   1 usd:   0
> Sep 13 15:31:25 escabot Normal per-cpu:
> Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd: 143    
> Cold: hi:   62, btch:  15 usd:  56
> Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  31    
> Cold: hi:   62, btch:  15 usd:  53
> Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:   8    
> Cold: hi:   62, btch:  15 usd:  54
> Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd:  99    
> Cold: hi:   62, btch:  15 usd:  58
> Sep 13 15:31:25 escabot HighMem per-cpu:
> Sep 13 15:31:25 escabot CPU    0: Hot: hi:  186, btch:  31 usd:   1    
> Cold: hi:   62, btch:  15 usd:  10
> Sep 13 15:31:25 escabot CPU    1: Hot: hi:  186, btch:  31 usd:  23    
> Cold: hi:   62, btch:  15 usd:   1
> Sep 13 15:31:25 escabot CPU    2: Hot: hi:  186, btch:  31 usd:  18    
> Cold: hi:   62, btch:  15 usd:   1
> Sep 13 15:31:25 escabot CPU    3: Hot: hi:  186, btch:  31 usd:   9    
> Cold: hi:   62, btch:  15 usd:  12
> Sep 13 15:31:25 escabot Active:8078 inactive:1776744 dirty:10338  
> writeback:0 unstable:0
> Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988  
> pagetables:129 bounce:0
> Sep 13 15:31:25 escabot DMA free:3560kB min:68kB low:84kB high:100kB  
> active:0kB inactive:0kB present:16256kB pages_scanned:0  
> all_unreclaimable? yes
> Sep 13 15:31:25 escabot lowmem_reserve[]: 0 873 12176
> Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB high: 
> 5616kB active:0kB inactive:3160kB present:894080kB pages_scanned:5336  
> all_unreclaimable? yes
> Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 90424
> Sep 13 15:31:25 escabot HighMem free:4354372kB min:512kB low:12640kB  
> high:24768kB active:32312kB inactive:7103816kB present:11574272kB  
> pages_scanned:0 all_unreclaimable? no
> Sep 13 15:31:25 escabot lowmem_reserve[]: 0 0 0
> Sep 13 15:31:25 escabot DMA: 4*4kB 3*8kB 0*16kB 0*32kB 1*64kB 1*128kB  
> 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3560kB
> Sep 13 15:31:25 escabot Normal: 44*4kB 18*8kB 6*16kB 1*32kB 6*64kB  
> 2*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3648kB
> Sep 13 15:31:25 escabot HighMem: 1*4kB 0*8kB 0*16kB 37914*32kB  
> 43188*64kB 1628*128kB 5*256kB 247*512kB 30*1024kB 1*2048kB 2*4096kB =  
> 4354372kB
> Sep 13 15:31:25 escabot Swap cache: add 983, delete 972, find 0/1,  
> race 0+0
> Sep 13 15:31:25 escabot Free swap  = 2004216kB
> Sep 13 15:31:25 escabot Total swap = 2008116kB
> Sep 13 15:31:25 escabot Free swap:       2004216kB



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-15 10:08 ` Peter Zijlstra
@ 2007-09-15 10:50   ` Anton Altaparmakov
  2007-09-15 11:19     ` Peter Zijlstra
  2007-09-15 10:52   ` Andrew Morton
  1 sibling, 1 reply; 24+ messages in thread
From: Anton Altaparmakov @ 2007-09-15 10:50 UTC (permalink / raw)
  To: marc.smith, Peter Zijlstra
  Cc: Linux Kernel Mailing List, Linux Memory Management List

Hi,

Thanks for looking into this.

On 15 Sep 2007, at 11:08, Peter Zijlstra wrote:
> On Sat, 2007-09-15 at 08:27 +0100, Anton Altaparmakov wrote:
>
> Please, don't word wrap log-files, they're hard enough to read without
> it :-(
>
> ( I see people do this more and more often, *WHY*? is that because we
> like 80 char lines, in code and email? )

I haven't word wrapped it at all.  The lines appear as whole lines in  
Apple Mail (my email client).  It must be your email client that is  
wrapping them...

> Anyway, looks like all of zone_normal is pinned in kernel allocations:
>
>> Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB  
>> high: 5616kB active:0kB inactive:3160kB present:894080kB  
>> pages_scanned:5336 all_unreclaimable? yes
>
> Out of the 870 odd mb only 3 is on the lru.
>
> Would be grand if you could have a look at slabinfo and the like.

Ok, Marc, would you be able to do the copy to /dev/null again (booted  
without the mem=2G option) and, when you see that most of the memory  
has gone, run:

cat /proc/slabinfo > slabinfo.txt
cat /proc/meminfo > meminfo.txt
cat /proc/vmstat > vmstat.txt
cat /proc/zoneinfo > zoneinfo.txt

Then quickly abort the copy with "Ctrl+C" so hopefully you will not  
get a kernel panic...

Please send the content of the generated files.  Thanks.

That should hopefully show what in the kernel has eaten all the low  
memory...
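As an editorial aside, the four snapshots are easiest to catch if taken repeatedly while the copy runs, so the last one before the panic survives on disk. A minimal sketch, assuming Python is available on the box; the paths and destination directory are illustrative:

```python
import os
import shutil
import time

# The four files Anton asks for above.
PROC_FILES = ["/proc/slabinfo", "/proc/meminfo", "/proc/vmstat", "/proc/zoneinfo"]

def snapshot(paths, dest_dir):
    """Copy each readable file into dest_dir, tagged with a timestamp."""
    os.makedirs(dest_dir, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    saved = []
    for path in paths:
        dest = os.path.join(dest_dir, stamp + "-" + os.path.basename(path) + ".txt")
        try:
            shutil.copyfile(path, dest)
            saved.append(dest)
        except OSError:
            pass  # missing or unreadable file; skip it
    return saved

if __name__ == "__main__":
    print(snapshot(PROC_FILES, "/tmp/oom-snapshots"))
```

Running it once a second from a shell loop alongside the copy, then aborting with Ctrl+C as described above, should leave usable snapshots behind.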

Best regards,

	Anton
-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/



* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-15 10:08 ` Peter Zijlstra
  2007-09-15 10:50   ` Anton Altaparmakov
@ 2007-09-15 10:52   ` Andrew Morton
  2007-09-17 14:04     ` Anton Altaparmakov
  1 sibling, 1 reply; 24+ messages in thread
From: Andrew Morton @ 2007-09-15 10:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Anton Altaparmakov, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith

On Sat, 15 Sep 2007 12:08:17 +0200 Peter Zijlstra <peterz@infradead.org> wrote:

> On Sat, 2007-09-15 at 08:27 +0100, Anton Altaparmakov wrote:
> 
> Please, don't word wrap log-files, they're hard enough to read without
> it :-(
> 
> ( I see people do this more and more often, *WHY*? is that because we
> like 80 char lines, in code and email? )

Isn't it?

> 
> Anyway, looks like all of zone_normal is pinned in kernel allocations:
> 
> > Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB high: 5616kB active:0kB inactive:3160kB present:894080kB pages_scanned:5336 all_unreclaimable? yes
> 
> Out of the 870 odd mb only 3 is on the lru.
> 
> > Would be grand if you could have a look at slabinfo and the like.

Definitely.

> > Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988  
> > pagetables:129 bounce:0

814,665,728 bytes of slab.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-15 10:50   ` Anton Altaparmakov
@ 2007-09-15 11:19     ` Peter Zijlstra
  2007-09-16  7:22       ` Kyle Moffett
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Zijlstra @ 2007-09-15 11:19 UTC (permalink / raw)
  To: Anton Altaparmakov
  Cc: marc.smith, Linux Kernel Mailing List,
	Linux Memory Management List

On Sat, 2007-09-15 at 11:50 +0100, Anton Altaparmakov wrote:
> Hi,
> 
> Thanks for looking into this.
> 
> On 15 Sep 2007, at 11:08, Peter Zijlstra wrote:
> > On Sat, 2007-09-15 at 08:27 +0100, Anton Altaparmakov wrote:
> >
> > Please, don't word wrap log-files, they're hard enough to read without
> > it :-(
> >
> > ( I see people do this more and more often, *WHY*? is that because we
> > like 80 char lines, in code and email? )
> 
> I haven't word wrapped it at all.  The lines appear as whole lines in  
> Apple Mail (my email client).  It must be your email client that is  
> wrapping them...
> 
> > Anyway, looks like all of zone_normal is pinned in kernel allocations:
> >
> >> Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB  
> >> high: 5616kB active:0kB inactive:3160kB present:894080kB  
> >> pages_scanned:5336 all_unreclaimable? yes
> >

Oddly, this line is still long in Andrew's reply but wrapped in yours.
Must be some odd mailer interaction.

/me apologises.

I guess I really have to start looking for a hackable GUI email client
that has semi-proper IMAP support.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-15 11:19     ` Peter Zijlstra
@ 2007-09-16  7:22       ` Kyle Moffett
  2007-09-16 16:16         ` Peter Zijlstra
  0 siblings, 1 reply; 24+ messages in thread
From: Kyle Moffett @ 2007-09-16  7:22 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Anton Altaparmakov, marc.smith, Linux Kernel Mailing List,
	Linux Memory Management List

On Sep 15, 2007, at 07:19:01, Peter Zijlstra wrote:
> On Sat, 2007-09-15 at 11:50 +0100, Anton Altaparmakov wrote:
>> I haven't word wrapped it at all.  The lines appear as whole lines  
>> in Apple Mail (my email client).  It must be your email client  
>> that is wrapping them...
>
> Oddly, this line is still long in Andrew's reply but wrapped in  
> yours.  Must be some odd mailer interaction.

Actually Apple Mail.app sends format=flowed wrapped to 73  
characters.  So a wrapped line has a single space character right  
before each 'wrapping' newline.  If your mail client supports  
format=flowed viewing and sends without format=flowed (like AKPM's  
mailer appears to), then it will properly unwrap the lines and resend  
without the wrapping.  Mailers which *DON'T* support format=flowed  
will see the wrapped version.  Normally this is what you want, but  
it's a PITA for patches and logfiles.

I believe with Mail.app if you attach a .txt file it will be  
unmangled and sent as "Content-Type: text/plain" and  
"Content-Disposition: inline", so most email clients will display it  
as part of the message.
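The soft-wrap rule Kyle describes comes from RFC 3676 (format=flowed) and can be sketched in a few lines. This is an editorial illustration only, ignoring quote-depth handling and space-stuffing:

```python
def unflow(text):
    """Join format=flowed soft breaks: a line ending in a space flows
    into the next line; a line without one is a hard break."""
    out, buf = [], ""
    for line in text.split("\n"):
        if line.endswith(" "):   # soft break: keep accumulating
            buf += line
        else:                    # hard break: flush the logical line
            out.append(buf + line)
            buf = ""
    if buf:
        out.append(buf)
    return "\n".join(out)

wrapped = "Normal free:3648kB min:3744kB \nlow:4680kB high:5616kB"
print(unflow(wrapped))  # one logical line, as a flowed-aware mailer shows it
```

This is exactly why the same log line appears wrapped in one reply and whole in another: it depends on whether the reader's mailer applies the soft-break join.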

Cheers,
Kyle Moffett


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-16  7:22       ` Kyle Moffett
@ 2007-09-16 16:16         ` Peter Zijlstra
  0 siblings, 0 replies; 24+ messages in thread
From: Peter Zijlstra @ 2007-09-16 16:16 UTC (permalink / raw)
  To: Kyle Moffett
  Cc: Anton Altaparmakov, marc.smith, Linux Kernel Mailing List,
	Linux Memory Management List

On Sun, 16 Sep 2007 03:22:11 -0400 Kyle Moffett <mrmacman_g4@mac.com>
wrote:

> On Sep 15, 2007, at 07:19:01, Peter Zijlstra wrote:
> > On Sat, 2007-09-15 at 11:50 +0100, Anton Altaparmakov wrote:
> >> I haven't word wrapped it at all.  The lines appear as whole lines  
> >> in Apple Mail (my email client).  It must be your email client  
> >> that is wrapping them...
> >
> > Oddly, this line is still long in Andrew's reply but wrapped in  
> > yours.  Must be some odd mailer interaction.
> 
> Actually Apple Mail.app sends format=flowed wrapped to 73  
> characters.  So a wrapped line has a single space character right  
> before each 'wrapping' newline.  If your mail client supports  
> format=flowed viewing and sends without format=flowed (like AKPM's  
> mailer appears to), then it will properly unwrap the lines and resend  
> without the wrapping.  Mailers which *DON'T* support format=flowed  
> will see the wrapped version.  Normally this is what you want, but  
> it's a PITA for patches and logfiles.
> 
> I believe with Mail.app if you attach a .txt file it will be  
> unmangled and sent as "Content-Type: text/plain" and  
> "Content-Disposition: inline", so most email clients will display it  
> as part of the message.

Ah, thanks for the hint. I've switched mailer to claws-mail-3.0.0 and
hacked the thing a bit so it's useful. One of the things I added was a
menu entry for respect_flowed_format. So I can now flip between flowed
and non-flowed with a key.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 14:09       ` Anton Altaparmakov
@ 2007-09-16 22:28         ` Nick Piggin
  2007-09-17 17:04           ` Rik van Riel
  0 siblings, 1 reply; 24+ messages in thread
From: Nick Piggin @ 2007-09-16 22:28 UTC (permalink / raw)
  To: Anton Altaparmakov, Andrew Morton
  Cc: Peter Zijlstra, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith

On Tuesday 18 September 2007 00:09, Anton Altaparmakov wrote:
> On 17 Sep 2007, at 15:04, Anton Altaparmakov wrote:
> > On 15 Sep 2007, at 11:52, Andrew Morton wrote:
> >> On Sat, 15 Sep 2007 12:08:17 +0200 Peter Zijlstra
> >>
> >> <peterz@infradead.org> wrote:
> >>> Anyway, looks like all of zone_normal is pinned in kernel
> >>>
> >>> allocations:
> >>>> Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB
> >>>> high: 5616kB active:0kB inactive:3160kB present:894080kB
> >>>> pages_scanned:5336 all_unreclaimable? yes
> >>>
> >>> Out of the 870 odd mb only 3 is on the lru.
> >>>
> >>> Would be grand if you could have a look at slabinfo and the like.
> >>
> >> Definitely.
> >>
> >>>> Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988
> >>>> pagetables:129 bounce:0
> >>
> >> 814,665,728 bytes of slab.
> >
> > Marc emailed me the contents of /proc/{slabinfo,meminfo,vmstat,zoneinfo}
> > taken just a few seconds before the machine panic()ed due to running
> > OOM completely...  The files are attached this time rather than
> > inlined so people don't complain about line wrapping!  (No doubt
> > people will not complain about them being attached!  )-:)
> >
> > If I read it correctly it appears all of low memory is eaten up by
> > buffer_heads.
> >
> > <quote>
> > # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
> > buffer_head       12569528 12569535     56   67    1 : tunables  120   60    8 : slabdata 187605 187605      0
> > </quote>
> >
> > That is 671MiB of low memory in buffer_heads.
>
> I meant that is 732MiB of low memory in buffer_heads.  (12569535
> num_objs / 67 objperslab * 1 pagesperslab * 4096 PAGE_SIZE)

There is a sledgehammer in there which is supposed to alleviate
this problem: vmscan.c has buffer_heads_over_limit. Could you check
if that is kicking in? (Add a line in mm/page_alloc.c:show_free_areas()
to print it.)

However, I'd guess that logic should be pretty robust. So it looks like
highmem is not getting scanned, and the buffer heads therefore never
get a chance to be stripped off the highmem pages!

(Rik has a patch sitting in -mm, I believe, which would make this
problem even worse by doing even less highmem scanning in response to
lowmem allocations.) However, your user isn't using that kernel, so I
wonder why it isn't scanning highmem... it's been a while since I looked
at reclaim, but AFAIR it *should* be scanning it.
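An editorial cross-check of the buffer_head arithmetic quoted above, using only the constants reported in the attached slabinfo (12,569,535 objects, 67 objects per slab, 1 page per slab, 4KiB pages):

```python
PAGE_SIZE = 4096  # i386 page size

num_objs = 12_569_535    # buffer_head <num_objs> from the attached slabinfo
objperslab = 67          # 56-byte objects per one-page slab
pagesperslab = 1

slabs = -(-num_objs // objperslab)        # ceiling division
total_bytes = slabs * pagesperslab * PAGE_SIZE
print(slabs)                 # 187605, matching the reported slabdata count
print(total_bytes // 2**20)  # 732 (MiB), matching Anton's corrected figure
```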


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-15 10:52   ` Andrew Morton
@ 2007-09-17 14:04     ` Anton Altaparmakov
  2007-09-17 14:09       ` Anton Altaparmakov
  2007-09-17 14:32       ` Peter Zijlstra
  0 siblings, 2 replies; 24+ messages in thread
From: Anton Altaparmakov @ 2007-09-17 14:04 UTC (permalink / raw)
  To: Andrew Morton, Peter Zijlstra
  Cc: Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

[-- Attachment #1: Type: text/plain, Size: 1999 bytes --]

On 15 Sep 2007, at 11:52, Andrew Morton wrote:
> On Sat, 15 Sep 2007 12:08:17 +0200 Peter Zijlstra  
> <peterz@infradead.org> wrote:
>> Anyway, looks like all of zone_normal is pinned in kernel  
>> allocations:
>>
>>> Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB  
>>> high: 5616kB active:0kB inactive:3160kB present:894080kB  
>>> pages_scanned:5336 all_unreclaimable? yes
>>
>> Out of the 870 odd mb only 3 is on the lru.
>>
>> Would be grand if you could have a look at slabinfo and the like.
>
> Definitely.
>
>>> Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988
>>> pagetables:129 bounce:0
>
> 814,665,728 bytes of slab.

Marc emailed me the contents of /proc/{slabinfo,meminfo,vmstat,zoneinfo}  
taken just a few seconds before the machine panic()ed due to running  
OOM completely...  The files are attached this time rather than  
inlined so people don't complain about line wrapping!  (No doubt  
people will not complain about them being attached!  )-:)

If I read it correctly it appears all of low memory is eaten up by  
buffer_heads.

<quote>
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
buffer_head       12569528 12569535     56   67    1 : tunables  120   60    8 : slabdata 187605 187605      0
</quote>

That is 671MiB of low memory in buffer_heads.

But why is the kernel not reclaiming them by getting rid of the page  
cache pages they are attached to or even leaving the pages around but  
killing their buffers?

I don't think I am doing anything in NTFS to cause this problem to  
happen...  Other than using buffer heads for my page cache pages, that  
is, but that is hardly a crime!  /-;

Best regards,

	Anton
-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/


[-- Attachment #2: slabinfo.txt --]
[-- Type: text/plain, Size: 16276 bytes --]

slabinfo - version: 2.1
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
jbd_4k                 0      0   4096    1    1 : tunables   24   12    8 : slabdata      0      0      0
scsi_cmd_cache        48     48    320   12    1 : tunables   54   27    8 : slabdata      4      4      0
qla2xxx_srbs         283    330    128   30    1 : tunables  120   60    8 : slabdata     11     11      0
rpc_buffers            8      8   2048    2    1 : tunables   24   12    8 : slabdata      4      4      0
rpc_tasks              8     20    192   20    1 : tunables  120   60    8 : slabdata      1      1      0
rpc_inode_cache        0      0    448    9    1 : tunables   54   27    8 : slabdata      0      0      0
UNIX                   4     27    448    9    1 : tunables   54   27    8 : slabdata      3      3      0
flow_cache             0      0    128   30    1 : tunables  120   60    8 : slabdata      0      0      0
cfq_ioc_pool           0      0     88   44    1 : tunables  120   60    8 : slabdata      0      0      0
cfq_pool               0      0     96   40    1 : tunables  120   60    8 : slabdata      0      0      0
mqueue_inode_cache      1      7    576    7    1 : tunables   54   27    8 : slabdata      1      1      0
xfs_chashlist          0      0     24  145    1 : tunables  120   60    8 : slabdata      0      0      0
xfs_ili                0      0    140   28    1 : tunables  120   60    8 : slabdata      0      0      0
xfs_inode              0      0    448    9    1 : tunables   54   27    8 : slabdata      0      0      0
xfs_efi_item           0      0    260   15    1 : tunables   54   27    8 : slabdata      0      0      0
xfs_efd_item           0      0    260   15    1 : tunables   54   27    8 : slabdata      0      0      0
xfs_buf_item           0      0    148   26    1 : tunables  120   60    8 : slabdata      0      0      0
xfs_acl                0      0    304   13    1 : tunables   54   27    8 : slabdata      0      0      0
xfs_ifork              0      0     56   67    1 : tunables  120   60    8 : slabdata      0      0      0
xfs_dabuf              0      0     16  203    1 : tunables  120   60    8 : slabdata      0      0      0
xfs_da_state           0      0    336   11    1 : tunables   54   27    8 : slabdata      0      0      0
xfs_trans              0      0    636    6    1 : tunables   54   27    8 : slabdata      0      0      0
xfs_btree_cur          0      0    140   28    1 : tunables  120   60    8 : slabdata      0      0      0
xfs_bmap_free_item      0      0     16  203    1 : tunables  120   60    8 : slabdata      0      0      0
xfs_buf                0      0    256   15    1 : tunables  120   60    8 : slabdata      0      0      0
xfs_ioend             32     67     56   67    1 : tunables  120   60    8 : slabdata      1      1      0
xfs_vnode              0      0    384   10    1 : tunables   54   27    8 : slabdata      0      0      0
udf_inode_cache        0      0    404    9    1 : tunables   54   27    8 : slabdata      0      0      0
ntfs_big_inode_cache     23     28    576    7    1 : tunables   54   27    8 : slabdata      4      4      0
ntfs_inode_cache       0      0    184   21    1 : tunables  120   60    8 : slabdata      0      0      0
ntfs_name_cache        0      0    512    8    1 : tunables   54   27    8 : slabdata      0      0      0
ntfs_attr_ctx_cache      0      0     32  113    1 : tunables  120   60    8 : slabdata      0      0      0
ntfs_index_ctx_cache      0      0     64   59    1 : tunables  120   60    8 : slabdata      0      0      0
nfs_write_data        36     36    448    9    1 : tunables   54   27    8 : slabdata      4      4      0
nfs_read_data         32     36    448    9    1 : tunables   54   27    8 : slabdata      4      4      0
nfs_inode_cache        0      0    612    6    1 : tunables   54   27    8 : slabdata      0      0      0
nfs_page               0      0     64   59    1 : tunables  120   60    8 : slabdata      0      0      0
isofs_inode_cache      0      0    376   10    1 : tunables   54   27    8 : slabdata      0      0      0
fat_inode_cache        0      0    408    9    1 : tunables   54   27    8 : slabdata      0      0      0
fat_cache              0      0     20  169    1 : tunables  120   60    8 : slabdata      0      0      0
ext2_inode_cache       2     16    464    8    1 : tunables   54   27    8 : slabdata      2      2      0
journal_handle        32    169     20  169    1 : tunables  120   60    8 : slabdata      1      1      0
journal_head          40     72     52   72    1 : tunables  120   60    8 : slabdata      1      1      0
revoke_table           4    254     12  254    1 : tunables  120   60    8 : slabdata      1      1      0
revoke_record          0      0     16  203    1 : tunables  120   60    8 : slabdata      0      0      0
ext3_inode_cache     324    368    500    8    1 : tunables   54   27    8 : slabdata     46     46      0
ext3_xattr             0      0     48   78    1 : tunables  120   60    8 : slabdata      0      0      0
reiser_inode_cache      0      0    452    8    1 : tunables   54   27    8 : slabdata      0      0      0
dnotify_cache          0      0     20  169    1 : tunables  120   60    8 : slabdata      0      0      0
eventpoll_pwq          0      0     36  101    1 : tunables  120   60    8 : slabdata      0      0      0
eventpoll_epi          0      0    128   30    1 : tunables  120   60    8 : slabdata      0      0      0
inotify_event_cache      0      0     28  127    1 : tunables  120   60    8 : slabdata      0      0      0
inotify_watch_cache      1     92     40   92    1 : tunables  120   60    8 : slabdata      1      1      0
kioctx                 0      0    192   20    1 : tunables  120   60    8 : slabdata      0      0      0
kiocb                  0      0    192   20    1 : tunables  120   60    8 : slabdata      0      0      0
fasync_cache           0      0     16  203    1 : tunables  120   60    8 : slabdata      0      0      0
shmem_inode_cache   1443   1476    448    9    1 : tunables   54   27    8 : slabdata    164    164      0
posix_timers_cache      0      0     96   40    1 : tunables  120   60    8 : slabdata      0      0      0
uid_cache              0      0     64   59    1 : tunables  120   60    8 : slabdata      0      0      0
UDP-Lite               0      0    576    7    1 : tunables   54   27    8 : slabdata      0      0      0
tcp_bind_bucket        1    203     16  203    1 : tunables  120   60    8 : slabdata      1      1      0
inet_peer_cache        0      0     64   59    1 : tunables  120   60    8 : slabdata      0      0      0
secpath_cache          0      0     32  113    1 : tunables  120   60    8 : slabdata      0      0      0
xfrm_dst_cache         0      0    256   15    1 : tunables  120   60    8 : slabdata      0      0      0
ip_fib_alias          14    113     32  113    1 : tunables  120   60    8 : slabdata      1      1      0
ip_fib_hash           14    113     32  113    1 : tunables  120   60    8 : slabdata      1      1      0
ip_dst_cache          10     30    256   15    1 : tunables  120   60    8 : slabdata      2      2      0
arp_cache              3     20    192   20    1 : tunables  120   60    8 : slabdata      1      1      0
RAW                    5      7    512    7    1 : tunables   54   27    8 : slabdata      1      1      0
UDP                    0      0    576    7    1 : tunables   54   27    8 : slabdata      0      0      0
tw_sock_TCP            0      0    128   30    1 : tunables  120   60    8 : slabdata      0      0      0
request_sock_TCP       0      0     64   59    1 : tunables  120   60    8 : slabdata      0      0      0
TCP                    4      7   1152    7    2 : tunables   24   12    8 : slabdata      1      1      0
sgpool-128            39     42   2560    3    2 : tunables   24   12    8 : slabdata     14     14      0
sgpool-64             44     51   1280    3    1 : tunables   24   12    8 : slabdata     16     17      0
sgpool-32             34     54    640    6    1 : tunables   54   27    8 : slabdata      8      9      0
sgpool-16             57     60    320   12    1 : tunables   54   27    8 : slabdata      5      5      0
sgpool-8              57     80    192   20    1 : tunables  120   60    8 : slabdata      3      4      0
scsi_io_context        0      0    104   37    1 : tunables  120   60    8 : slabdata      0      0      0
blkdev_ioc            31    127     28  127    1 : tunables  120   60    8 : slabdata      1      1      0
blkdev_queue          32     32    956    4    1 : tunables   54   27    8 : slabdata      8      8      0
blkdev_requests      100    126    184   21    1 : tunables  120   60    8 : slabdata      6      6      0
biovec-256             7      8   3072    2    2 : tunables   24   12    8 : slabdata      4      4      0
biovec-128             7     10   1536    5    2 : tunables   24   12    8 : slabdata      2      2      0
biovec-64              7     10    768    5    1 : tunables   54   27    8 : slabdata      2      2      0
biovec-16              7     20    192   20    1 : tunables  120   60    8 : slabdata      1      1      0
biovec-4               7     59     64   59    1 : tunables  120   60    8 : slabdata      1      1      0
biovec-1             596    812     16  203    1 : tunables  120   60    8 : slabdata      4      4      0
bio                  862   1110    128   30    1 : tunables  120   60    8 : slabdata     37     37      0
sock_inode_cache      23     45    448    9    1 : tunables   54   27    8 : slabdata      5      5      0
skbuff_fclone_cache     24     24    320   12    1 : tunables   54   27    8 : slabdata      2      2      0
skbuff_head_cache    461    580    192   20    1 : tunables  120   60    8 : slabdata     29     29      0
file_lock_cache       15     39    100   39    1 : tunables  120   60    8 : slabdata      1      1      0
Acpi-Operand        1107   1196     40   92    1 : tunables  120   60    8 : slabdata     13     13      0
Acpi-ParseExt          0      0     44   84    1 : tunables  120   60    8 : slabdata      0      0      0
Acpi-Parse             0      0     28  127    1 : tunables  120   60    8 : slabdata      0      0      0
Acpi-State             0      0     44   84    1 : tunables  120   60    8 : slabdata      0      0      0
Acpi-Namespace       797    845     20  169    1 : tunables  120   60    8 : slabdata      5      5      0
proc_inode_cache     418    418    364   11    1 : tunables   54   27    8 : slabdata     38     38      0
sigqueue               0      0    144   27    1 : tunables  120   60    8 : slabdata      0      0      0
radix_tree_node    25298  25298    288   13    1 : tunables   54   27    8 : slabdata   1946   1946      0
bdev_cache            10     21    512    7    1 : tunables   54   27    8 : slabdata      3      3      0
sysfs_dir_cache     7276   7308     44   84    1 : tunables  120   60    8 : slabdata     87     87      0
mnt_cache             25     90    128   30    1 : tunables  120   60    8 : slabdata      3      3      0
inode_cache         1968   2057    348   11    1 : tunables   54   27    8 : slabdata    187    187      0
dentry_cache        4266   5452    136   29    1 : tunables  120   60    8 : slabdata    188    188      0
filp                 413    740    192   20    1 : tunables  120   60    8 : slabdata     37     37      0
names_cache            3      3   4096    1    1 : tunables   24   12    8 : slabdata      3      3      0
idr_layer_cache       98    116    136   29    1 : tunables  120   60    8 : slabdata      4      4      0
buffer_head       12569528 12569535     56   67    1 : tunables  120   60    8 : slabdata 187605 187605      0
mm_struct             63     63    512    7    1 : tunables   54   27    8 : slabdata      9      9      0
vm_area_struct       761   1716     88   44    1 : tunables  120   60    8 : slabdata     39     39      0
fs_cache              54    177     64   59    1 : tunables  120   60    8 : slabdata      3      3      0
files_cache           55     90    256   15    1 : tunables  120   60    8 : slabdata      6      6      0
signal_cache         100    110    384   10    1 : tunables   54   27    8 : slabdata     11     11      0
sighand_cache         81     81   1344    3    1 : tunables   24   12    8 : slabdata     27     27      0
task_struct           87     87   1296    3    1 : tunables   24   12    8 : slabdata     29     29      0
anon_vma             359    609     16  203    1 : tunables  120   60    8 : slabdata      3      3      0
pgd                  112    226     32  113    1 : tunables  120   60    8 : slabdata      2      2      0
pmd                   72     72   4096    1    1 : tunables   24   12    8 : slabdata     72     72      0
pid                   97    505     36  101    1 : tunables  120   60    8 : slabdata      5      5      0
size-131072(DMA)       0      0 131072    1   32 : tunables    8    4    0 : slabdata      0      0      0
size-131072            0      0 131072    1   32 : tunables    8    4    0 : slabdata      0      0      0
size-65536(DMA)        0      0  65536    1   16 : tunables    8    4    0 : slabdata      0      0      0
size-65536             1      1  65536    1   16 : tunables    8    4    0 : slabdata      1      1      0
size-32768(DMA)        0      0  32768    1    8 : tunables    8    4    0 : slabdata      0      0      0
size-32768             5      5  32768    1    8 : tunables    8    4    0 : slabdata      5      5      0
size-16384(DMA)        0      0  16384    1    4 : tunables    8    4    0 : slabdata      0      0      0
size-16384             4      4  16384    1    4 : tunables    8    4    0 : slabdata      4      4      0
size-8192(DMA)         0      0   8192    1    2 : tunables    8    4    0 : slabdata      0      0      0
size-8192             84     84   8192    1    2 : tunables    8    4    0 : slabdata     84     84      0
size-4096(DMA)         0      0   4096    1    1 : tunables   24   12    8 : slabdata      0      0      0
size-4096             36     36   4096    1    1 : tunables   24   12    8 : slabdata     36     36      0
size-2048(DMA)         0      0   2048    2    1 : tunables   24   12    8 : slabdata      0      0      0
size-2048            575    580   2048    2    1 : tunables   24   12    8 : slabdata    290    290      0
size-1024(DMA)         0      0   1024    4    1 : tunables   54   27    8 : slabdata      0      0      0
size-1024            196    196   1024    4    1 : tunables   54   27    8 : slabdata     49     49      0
size-512(DMA)          0      0    512    8    1 : tunables   54   27    8 : slabdata      0      0      0
size-512            1096   1208    512    8    1 : tunables   54   27    8 : slabdata    151    151      0
size-256(DMA)          0      0    256   15    1 : tunables  120   60    8 : slabdata      0      0      0
size-256             256    270    256   15    1 : tunables  120   60    8 : slabdata     18     18      0
size-192(DMA)          0      0    192   20    1 : tunables  120   60    8 : slabdata      0      0      0
size-192             140    140    192   20    1 : tunables  120   60    8 : slabdata      7      7      0
size-128(DMA)          0      0    128   30    1 : tunables  120   60    8 : slabdata      0      0      0
size-128             700    780    128   30    1 : tunables  120   60    8 : slabdata     26     26      0
size-64(DMA)           0      0     64   59    1 : tunables  120   60    8 : slabdata      0      0      0
size-32(DMA)           0      0     32  113    1 : tunables  120   60    8 : slabdata      0      0      0
size-64             2388   2478     64   59    1 : tunables  120   60    8 : slabdata     42     42      0
size-32             4299   5198     32  113    1 : tunables  120   60    8 : slabdata     46     46      0
kmem_cache           149    180    128   30    1 : tunables  120   60    8 : slabdata      6      6      0

[-- Attachment #4: meminfo.txt --]
[-- Type: text/plain, Size: 728 bytes --]

MemTotal:     12306460 kB
MemFree:       5228260 kB
Buffers:          3968 kB
Cached:        6293928 kB
SwapCached:          0 kB
Active:          10756 kB
Inactive:      6291416 kB
HighTotal:    11493340 kB
HighFree:      5191416 kB
LowTotal:       813120 kB
LowFree:         36844 kB
SwapTotal:     2008116 kB
SwapFree:      2008116 kB
Dirty:               4 kB
Writeback:           0 kB
AnonPages:        4328 kB
Mapped:           3100 kB
Slab:           766572 kB
SReclaimable:   752292 kB
SUnreclaim:      14280 kB
PageTables:        320 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   8161344 kB
Committed_AS:    19520 kB
VmallocTotal:   118776 kB
VmallocUsed:      3028 kB
VmallocChunk:   115384 kB

[-- Attachment #5: vmstat.txt --]
[-- Type: text/plain, Size: 808 bytes --]

nr_free_pages 1307065
nr_active 1572854
nr_inactive 2689
nr_anon_pages 1082
nr_mapped 775
nr_file_pages 1574474
nr_dirty 1
nr_writeback 0
nr_slab_reclaimable 188073
nr_slab_unreclaimable 3570
nr_page_table_pages 80
nr_unstable 0
nr_bounce 0
nr_vmscan_write 10
pgpgin 12847034
pgpgout 3501
pswpin 0
pswpout 0
pgalloc_dma 1944
pgalloc_normal 465694
pgalloc_high 3432418
pgfree 5207478
pgactivate 17439
pgdeactivate 13952
pgfault 1211997
pgmajfault 695
pgrefill_dma 0
pgrefill_normal 13952
pgrefill_high 0
pgsteal_dma 0
pgsteal_normal 1074
pgsteal_high 0
pgscan_kswapd_dma 0
pgscan_kswapd_normal 14688
pgscan_kswapd_high 0
pgscan_direct_dma 0
pgscan_direct_normal 3808
pgscan_direct_high 0
pginodesteal 0
slabs_scanned 8448
kswapd_steal 1007
kswapd_inodesteal 1637276
pageoutrun 609
allocstall 271
pgrotated 10

[-- Attachment #6: zoneinfo.txt --]
[-- Type: text/plain, Size: 4283 bytes --]

Node 0, zone      DMA
  pages free     2828
        min      17
        low      21
        high     25
        scanned  0 (a: 14 i: 14)
        spanned  4096
        present  4064
    nr_free_pages 2828
    nr_active    0
    nr_inactive  0
    nr_anon_pages 0
    nr_mapped    1
    nr_file_pages 0
    nr_dirty     0
    nr_writeback 0
    nr_slab_reclaimable 0
    nr_slab_unreclaimable 0
    nr_page_table_pages 0
    nr_unstable  0
    nr_bounce    0
    nr_vmscan_write 0
        protection: (0, 873, 12176)
  pagesets
    cpu: 0 pcp: 0
              count: 0
              high:  0
              batch: 1
    cpu: 0 pcp: 1
              count: 0
              high:  0
              batch: 1
  vm stats threshold: 6
    cpu: 1 pcp: 0
              count: 0
              high:  0
              batch: 1
    cpu: 1 pcp: 1
              count: 0
              high:  0
              batch: 1
  vm stats threshold: 6
    cpu: 2 pcp: 0
              count: 0
              high:  0
              batch: 1
    cpu: 2 pcp: 1
              count: 0
              high:  0
              batch: 1
  vm stats threshold: 6
    cpu: 3 pcp: 0
              count: 0
              high:  0
              batch: 1
    cpu: 3 pcp: 1
              count: 0
              high:  0
              batch: 1
  vm stats threshold: 6
  all_unreclaimable: 0
  prev_priority:     12
  start_pfn:         0
Node 0, zone   Normal
  pages free     6383
        min      936
        low      1170
        high     1404
        scanned  0 (a: 20 i: 20)
        spanned  225280
        present  223520
    nr_free_pages 6383
    nr_active    489
    nr_inactive  495
    nr_anon_pages 0
    nr_mapped    0
    nr_file_pages 988
    nr_dirty     1
    nr_writeback 0
    nr_slab_reclaimable 188073
    nr_slab_unreclaimable 3570
    nr_page_table_pages 80
    nr_unstable  0
    nr_bounce    0
    nr_vmscan_write 10
        protection: (0, 0, 90424)
  pagesets
    cpu: 0 pcp: 0
              count: 19
              high:  186
              batch: 31
    cpu: 0 pcp: 1
              count: 1
              high:  62
              batch: 15
  vm stats threshold: 24
    cpu: 1 pcp: 0
              count: 25
              high:  186
              batch: 31
    cpu: 1 pcp: 1
              count: 0
              high:  62
              batch: 15
  vm stats threshold: 24
    cpu: 2 pcp: 0
              count: 13
              high:  186
              batch: 31
    cpu: 2 pcp: 1
              count: 54
              high:  62
              batch: 15
  vm stats threshold: 24
    cpu: 3 pcp: 0
              count: 21
              high:  186
              batch: 31
    cpu: 3 pcp: 1
              count: 45
              high:  62
              batch: 15
  vm stats threshold: 24
  all_unreclaimable: 0
  prev_priority:     12
  start_pfn:         4096
Node 0, zone  HighMem
  pages free     1297805
        min      128
        low      3160
        high     6192
        scanned  0 (a: 0 i: 0)
        spanned  2916351
        present  2893568
    nr_free_pages 1297805
    nr_active    1572438
    nr_inactive  2194
    nr_anon_pages 1082
    nr_mapped    774
    nr_file_pages 1573559
    nr_dirty     0
    nr_writeback 0
    nr_slab_reclaimable 0
    nr_slab_unreclaimable 0
    nr_page_table_pages 0
    nr_unstable  0
    nr_bounce    0
    nr_vmscan_write 0
        protection: (0, 0, 0)
  pagesets
    cpu: 0 pcp: 0
              count: 19
              high:  186
              batch: 31
    cpu: 0 pcp: 1
              count: 12
              high:  62
              batch: 15
  vm stats threshold: 48
    cpu: 1 pcp: 0
              count: 6
              high:  186
              batch: 31
    cpu: 1 pcp: 1
              count: 9
              high:  62
              batch: 15
  vm stats threshold: 48
    cpu: 2 pcp: 0
              count: 151
              high:  186
              batch: 31
    cpu: 2 pcp: 1
              count: 8
              high:  62
              batch: 15
  vm stats threshold: 48
    cpu: 3 pcp: 0
              count: 55
              high:  186
              batch: 31
    cpu: 3 pcp: 1
              count: 4
              high:  62
              batch: 15
  vm stats threshold: 48
  all_unreclaimable: 0
  prev_priority:     12
  start_pfn:         229376

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 14:04     ` Anton Altaparmakov
@ 2007-09-17 14:09       ` Anton Altaparmakov
  2007-09-16 22:28         ` Nick Piggin
  2007-09-17 14:32       ` Peter Zijlstra
  1 sibling, 1 reply; 24+ messages in thread
From: Anton Altaparmakov @ 2007-09-17 14:09 UTC (permalink / raw)
  To: Andrew Morton, Peter Zijlstra
  Cc: Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

On 17 Sep 2007, at 15:04, Anton Altaparmakov wrote:
> On 15 Sep 2007, at 11:52, Andrew Morton wrote:
>> On Sat, 15 Sep 2007 12:08:17 +0200 Peter Zijlstra  
>> <peterz@infradead.org> wrote:
>>> Anyway, looks like all of zone_normal is pinned in kernel  
>>> allocations:
>>>
>>>> Sep 13 15:31:25 escabot Normal free:3648kB min:3744kB low:4680kB  
>>>> high: 5616kB active:0kB inactive:3160kB present:894080kB  
>>>> pages_scanned:5336 all_unreclaimable? yes
>>>
>>> Out of the 870 odd mb only 3 is on the lru.
>>>
>>> Would be grand if you could have a look at slabinfo and the like.
>>
>> Definitely.
>>
>>>> Sep 13 15:31:25 escabot free:1090395 slab:198893 mapped:988
>>>> pagetables:129 bounce:0
>>
>> 814,665,728 bytes of slab.
>
> Marc emailed me the contents of /proc/ 
> {slabinfo,meminfo,vmstat,zoneinfo}, taken just a few seconds before  
> the machine panic()ed due to running completely out of memory...  The  
> files are attached this time rather than inlined so people don't  
> complain about line wrapping!  (No doubt people will not complain  
> about them being attached!  )-:)
>
> If I read it correctly it appears all of low memory is eaten up by  
> buffer_heads.
>
> <quote>
> # name            <active_objs> <num_objs> <objsize> <objperslab>  
> <pagesperslab>
> : tunables <limit> <batchcount> <sharedfactor> : slabdata  
> <active_slabs> <num_s
> labs> <sharedavail>
> buffer_head       12569528 12569535     56   67    1 : tunables   
> 120   60    8 :
> slabdata 187605 187605      0
> </quote>
>
> That is 671MiB of low memory in buffer_heads.

I meant that is 732MiB of low memory in buffer_heads.  (12569535  
num_objs / 67 objperslab * 1 pagesperslab * 4096 PAGE_SIZE)
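That arithmetic can be checked mechanically. A small sketch (plain Python, using only the figures from the quoted slabinfo line; the earlier 671MiB figure counted just the 56-byte objects themselves, whereas whole slab pages are what the zone actually loses):

```python
# Figures from the quoted /proc/slabinfo line for buffer_head:
#   num_objs = 12569535, objperslab = 67, pagesperslab = 1
PAGE_SIZE = 4096

num_objs, objperslab, pagesperslab = 12569535, 67, 1

# Slab memory is consumed whole pages at a time, so round the slab
# count up (ceiling division).
slabs = -(-num_objs // objperslab)
slab_bytes = slabs * pagesperslab * PAGE_SIZE

# Object bytes alone (the source of the earlier 671MiB figure):
obj_bytes = num_objs * 56

print(slabs)                 # 187605, matching the slabdata column
print(slab_bytes // 2**20)   # 732 MiB of low memory in whole slab pages
print(obj_bytes // 2**20)    # 671 MiB counting only the 56-byte objects
```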

> But why is the kernel not reclaiming them by getting rid of the  
> page cache pages they are attached to or even leaving the pages  
> around but killing their buffers?
>
> I don't think I am doing anything in NTFS to cause this problem to  
> happen...  Other than using buffer heads for my page cache pages  
> that is but that is hardly a crime!  /-;

Best regards,

	Anton
-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 14:04     ` Anton Altaparmakov
  2007-09-17 14:09       ` Anton Altaparmakov
@ 2007-09-17 14:32       ` Peter Zijlstra
  2007-09-17 17:11         ` Rik van Riel
  1 sibling, 1 reply; 24+ messages in thread
From: Peter Zijlstra @ 2007-09-17 14:32 UTC (permalink / raw)
  To: Anton Altaparmakov
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith

On Mon, 17 Sep 2007 15:04:05 +0100 Anton Altaparmakov <aia21@cam.ac.uk>
wrote:

> The files  
> are attached this time rather than inlined so people don't complain  
> about line wrapping!  (No doubt people will not complain about them  
> being attached!  )-:)

I switched mailers after I learnt about flowed stuff. Still,
appreciated.

> If I read it correctly it appears all of low memory is eaten up by  
> buffer_heads.
> 
> <quote>
> # name            <active_objs> <num_objs> <objsize> <objperslab>  
> <pagesperslab>
> : tunables <limit> <batchcount> <sharedfactor> : slabdata  
> <active_slabs> <num_s
> labs> <sharedavail>
> buffer_head       12569528 12569535     56   67    1 : tunables   
> 120   60    8 :
> slabdata 187605 187605      0
> </quote>
> 
> That is 671MiB of low memory in buffer_heads.
> 
> But why is the kernel not reclaiming them by getting rid of the page  
> cache pages they are attached to or even leaving the pages around but  
> killing their buffers?

Well, you see, you have this very odd configuration where:

11GB highmem
 1GB normal

pagecache pages go into highmem
buggerheads go into normal

I'm guessing there is no pressure at all on zone_highmem so the
kernel will not try to reclaim pagecache. And because the pagecache
pages are happily sitting there, the buggerheads are pinned and do not
get reclaimed.
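Peter's point can be made quantitative with a rough back-of-the-envelope model (plain Python; the one-buffer_head-per-4KiB-pagecache-page assumption is mine, while the 67-objects-per-slab-page figure comes from the quoted slabinfo):

```python
# Rough model of the reported setup.  Assumptions: the whole 46 GiB file
# ends up in pagecache (highmem), one buffer_head per 4 KiB page, and
# 56-byte buffer_heads packed 67 per 4 KiB slab page in lowmem.
GiB = 2**30
PAGE_SIZE = 4096
OBJ_PER_SLAB = 67          # objperslab from the quoted slabinfo line

file_bytes = 46 * GiB
pages_cached = file_bytes // PAGE_SIZE      # pagecache pages, in highmem
buffer_heads = pages_cached                 # each pins lowmem slab space

slab_pages = -(-buffer_heads // OBJ_PER_SLAB)   # ceiling division
lowmem_mib = slab_pages * PAGE_SIZE // 2**20

print(buffer_heads)   # ~12 million -- same order as the 12569535 observed
print(lowmem_mib)     # ~700 MiB of ZONE_NORMAL pinned by buffer_heads
```

With only ~870MiB of usable lowmem on this i386 box, that alone is enough to explain the observed exhaustion.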


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-16 22:28         ` Nick Piggin
@ 2007-09-17 17:04           ` Rik van Riel
  2007-09-17 17:12             ` Nick Piggin
  0 siblings, 1 reply; 24+ messages in thread
From: Rik van Riel @ 2007-09-17 17:04 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Anton Altaparmakov, Andrew Morton, Peter Zijlstra,
	Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

Nick Piggin wrote:

> (Rik has a patch sitting in -mm I believe which would make this problem
> even worse, by doing even less highmem scanning in response to lowmem
> allocations). 

My patch should not make any difference here, since
balance_pgdat() already scans the zones from high to
low and sets an end_zone variable that determines the
highest zone to scan.

All my patch does is make sure that we do not try to
reclaim excessive amounts of dma or low memory when
a higher zone is full.
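The scanning order Rik describes can be illustrated with a toy user-space model (not the kernel code; the watermark test is simplified, and the free-page figures loosely follow the quoted dmesg and zoneinfo):

```python
# Toy model of the balance_pgdat() ordering described above: look at
# zones from highest to lowest, let the highest zone that is below its
# watermark set end_zone, and scan only zones up to end_zone.
zones = [                        # index 0 = lowest zone
    ("DMA",        2828,  17),   # (name, free pages, min watermark)
    ("Normal",      912, 936),   # free 3648kB < min 3744kB in the dmesg
    ("HighMem", 1297805, 128),   # plenty free -- no pressure here
]

def zones_to_scan(zones):
    end_zone = -1
    for i in reversed(range(len(zones))):   # highest zone first
        name, free, wmark = zones[i]
        if free < wmark:                    # below watermark: balance it
            end_zone = i
            break
    return [z[0] for z in zones[:end_zone + 1]]

scanned = zones_to_scan(zones)
print(scanned)   # ['DMA', 'Normal'] -- HighMem is never scanned, so its
                 # pagecache stays put and keeps the buffer_heads pinned
```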

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 14:32       ` Peter Zijlstra
@ 2007-09-17 17:11         ` Rik van Riel
  2007-09-17 20:15           ` Andrew Morton
  0 siblings, 1 reply; 24+ messages in thread
From: Rik van Riel @ 2007-09-17 17:11 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Anton Altaparmakov, Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith

Peter Zijlstra wrote:
> On Mon, 17 Sep 2007 15:04:05 +0100 Anton Altaparmakov <aia21@cam.ac.uk>
> wrote:
> 
>> The files  
>> are attached this time rather than inlined so people don't complain  
>> about line wrapping!  (No doubt people will not complain about them  
>> being attached!  )-:)
> 
> I switched mailers after I learnt about flowed stuff. Still,
> appreciated.
> 
>> If I read it correctly it appears all of low memory is eaten up by  
>> buffer_heads.
>>
>> <quote>
>> # name            <active_objs> <num_objs> <objsize> <objperslab>  
>> <pagesperslab>
>> : tunables <limit> <batchcount> <sharedfactor> : slabdata  
>> <active_slabs> <num_s
>> labs> <sharedavail>
>> buffer_head       12569528 12569535     56   67    1 : tunables   
>> 120   60    8 :
>> slabdata 187605 187605      0
>> </quote>
>>
>> That is 671MiB of low memory in buffer_heads.
>>
>> But why is the kernel not reclaiming them by getting rid of the page  
>> cache pages they are attached to or even leaving the pages around but  
>> killing their buffers?
> 
> Well, you see, you have this very odd configuration where:
> 
> 11GB highmem
>  1GB normal
> 
> pagecache pages go into highmem
> buggerheads go into normal
> 
> I'm guessing there is no pressure at all on zone_highmem so the
> kernel will not try to reclaim pagecache. And because the pagecache
> pages are happily sitting there, the buggerheads are pinned and do not
> get reclaimed.

I've got code for this in RHEL 3, but never bothered to
merge it upstream since I thought people with large memory
systems would be running 64 bit kernels by now.

Obviously I was wrong.  Andrew, are you interested in a
fix for this problem?

IIRC I simply kept a list of all buffer heads and walked
that to reclaim pages when the number of buffer heads is
too high (and we need memory).  This list can be maintained
in places where we already hold the lock for the buffer head
freelist, so there should be no additional locking overhead
(again, IIRC).
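A toy user-space sketch of that scheme (purely illustrative, not the RHEL 3 code; the names and the threshold are made up):

```python
from collections import deque

# Toy model: keep every buffer_head on a global list, and once the
# count exceeds a limit, walk the list from the oldest end and reclaim
# the pages those buffer_heads pin.
MAX_BUFFER_HEADS = 4           # invented threshold for illustration

bh_list = deque()              # global buffer_head list, oldest first

def alloc_buffer_head(page):
    bh = {"page": page}
    bh_list.append(bh)
    reclaimed = []
    # Over the limit: strip the oldest buffer_heads, releasing pages.
    while len(bh_list) > MAX_BUFFER_HEADS:
        victim = bh_list.popleft()
        reclaimed.append(victim["page"])    # page can now be freed
    return bh, reclaimed

pages_freed = []
for page in range(6):
    _, freed = alloc_buffer_head(page)
    pages_freed += freed

print(pages_freed)    # the two oldest pages (0 and 1) were reclaimed
print(len(bh_list))   # 4 buffer_heads remain, at the threshold
```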

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 17:04           ` Rik van Riel
@ 2007-09-17 17:12             ` Nick Piggin
  2007-09-18 14:41               ` Rik van Riel
  0 siblings, 1 reply; 24+ messages in thread
From: Nick Piggin @ 2007-09-17 17:12 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Anton Altaparmakov, Andrew Morton, Peter Zijlstra,
	Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

On Tuesday 18 September 2007 03:04, Rik van Riel wrote:
> Nick Piggin wrote:
> > (Rik has a patch sitting in -mm I believe which would make this problem
> > even worse, by doing even less highmem scanning in response to lowmem
> > allocations).
>
> My patch should not make any difference here, since
> balance_pgdat() already scans the zones from high to
> low and sets an end_zone variable that determines the
> highest zone to scan.
>
> All my patch does is make sure that we do not try to
> reclaim excessive amounts of dma or low memory when
> a higher zone is full.

Sorry, yeah I had it the wrong way around. Your patch would not
increase the probability of this problem.

We could have some logic in there to scan highmem when buffer
heads are over limit. But that really kind of sucks in that it introduces
some arbitrary point where reclaim behaviour completely changes...
Adding a shrinker for buffer heads is the "logical" approach that we
take for other non-page caches. That also kind of sucks because we
normally don't want to do this out of band buffer reclaiming and just
have it work from page reclaim (it will introduce extra locking and list
scanning).

Maybe when the machine is near OOM, we can just change the scanning
to do all zones -- a change in scanning behaviour at that point is better
than oom kill.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 17:11         ` Rik van Riel
@ 2007-09-17 20:15           ` Andrew Morton
  2007-09-17 20:46             ` Rik van Riel
  0 siblings, 1 reply; 24+ messages in thread
From: Andrew Morton @ 2007-09-17 20:15 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Peter Zijlstra, Anton Altaparmakov, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith

On Mon, 17 Sep 2007 13:11:14 -0400
Rik van Riel <riel@redhat.com> wrote:

> > I'm guessing there is no pressure at all on zone_highmem so the
> > kernel will not try to reclaim pagecache. And because the pagecache
> > pages are happily sitting there, the buggerheads are pinned and do not
> > get reclaimed.

yeah, this got pretty unavoidably broken when we killed the global LRU
in 2.5.early. It's odd that it took this long for someone to hit it.

> I've got code for this in RHEL 3, but never bothered to
> merge it upstream since I thought people with large memory
> systems would be running 64 bit kernels by now.
> 
> Obviously I was wrong.  Andrew, are you interested in a
> fix for this problem?
> 
> IIRC I simply kept a list of all buffer heads and walked
> that to reclaim pages when the number of buffer heads is
> too high (and we need memory).  This list can be maintained
> in places where we already hold the lock for the buffer head
> freelist, so there should be no additional locking overhead
> (again, IIRC).

Christoph's slab defragmentation code should permit us to fix this:
grab a page of buffer_heads off the slab lists, trylock the page,
strip the buffer_heads.  I think that would be a better approach
if we can get it going because it's more general.
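A toy user-space sketch of that approach (illustrative only, not Christoph's actual slab-defragmentation patches; the data layout is invented):

```python
# Toy model of the defrag approach sketched above: take one slab page's
# worth of buffer_heads, trylock each one's backing pagecache page, and
# strip the buffer_heads whose page lock was obtained.  A trylock that
# fails just skips the object -- defrag must never block on a busy page.
slab_page = [                              # buffer_heads on one slab page
    {"backing_page": 10, "page_busy": False},
    {"backing_page": 11, "page_busy": True},   # lock held elsewhere
    {"backing_page": 12, "page_busy": False},
]

def defrag_slab_page(slab_page):
    remaining = []
    for bh in slab_page:
        if bh["page_busy"]:        # trylock failed: leave this one alone
            remaining.append(bh)
            continue
        # Page "locked": strip the buffer_head and unlock.  Nothing to
        # do in this toy -- the object simply isn't kept.
    return remaining

left = defrag_slab_page(slab_page)
print(len(left))   # 1 buffer_head could not be stripped this pass
```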


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 20:15           ` Andrew Morton
@ 2007-09-17 20:46             ` Rik van Riel
  2007-09-17 21:11               ` Andrew Morton
  0 siblings, 1 reply; 24+ messages in thread
From: Rik van Riel @ 2007-09-17 20:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Peter Zijlstra, Anton Altaparmakov, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith

Andrew Morton wrote:
> On Mon, 17 Sep 2007 13:11:14 -0400
> Rik van Riel <riel@redhat.com> wrote:

>> IIRC I simply kept a list of all buffer heads and walked
>> that to reclaim pages when the number of buffer heads is
>> too high (and we need memory).  This list can be maintained
>> in places where we already hold the lock for the buffer head
>> freelist, so there should be no additional locking overhead
>> (again, IIRC).
> 
> Christoph's slab defragmentation code should permit us to fix this:
> grab a page of buffer_heads off the slab lists, trylock the page,
> strip the buffer_heads.  I think that would be a better approach
> if we can get it going because it's more general.

Is the slab defragmentation code in -mm or upstream already
or can I find it on the mailing list?

I've implemented code like you describe already, just give me
a few days to become familiar with the slab defragmentation
code and I'll get you a patch.

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 20:46             ` Rik van Riel
@ 2007-09-17 21:11               ` Andrew Morton
  2007-09-19 22:12                 ` Rik van Riel
  0 siblings, 1 reply; 24+ messages in thread
From: Andrew Morton @ 2007-09-17 21:11 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Peter Zijlstra, Anton Altaparmakov, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith

On Mon, 17 Sep 2007 16:46:47 -0400
Rik van Riel <riel@redhat.com> wrote:

> Andrew Morton wrote:
> > On Mon, 17 Sep 2007 13:11:14 -0400
> > Rik van Riel <riel@redhat.com> wrote:
> 
> >> IIRC I simply kept a list of all buffer heads and walked
> >> that to reclaim pages when the number of buffer heads is
> >> too high (and we need memory).  This list can be maintained
> >> in places where we already hold the lock for the buffer head
> >> freelist, so there should be no additional locking overhead
> >> (again, IIRC).
> > 
> > Christoph's slab defragmentation code should permit us to fix this:
> > grab a page of buffer_heads off the slab lists, trylock the page,
> > strip the buffer_heads.  I think that would be a better approach
> > if we can get it going because it's more general.
> 
> Is the slab defragmentation code in -mm or upstream already
> or can I find it on the mailing list?

Is on lkml and linux-mm: http://lkml.org/lkml/2007/8/31/329

> I've implemented code like you describe already, just give me
> a few days to become familiar with the slab defragmentation
> code and I'll get you a patch.

The patchset does buffer_heads: http://lkml.org/lkml/2007/8/31/348

I think the whole approach is reasonable.  It's mainly a matter of going
through it all with a toothcomb and getting it all merged up, tested and
integrated.  There's considerable potential for nasty and rarely-occurring
surprises in this stuff because it tends to approach locking in the
reversed order.

<checks the archives>

There were a few desultory comments, but I see no sign that the bulk of
the patches have had any serious review and testing from anyone yet.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 17:12             ` Nick Piggin
@ 2007-09-18 14:41               ` Rik van Riel
  0 siblings, 0 replies; 24+ messages in thread
From: Rik van Riel @ 2007-09-18 14:41 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Anton Altaparmakov, Andrew Morton, Peter Zijlstra,
	Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

Nick Piggin wrote:
> On Tuesday 18 September 2007 03:04, Rik van Riel wrote:
>> Nick Piggin wrote:
>>> (Rik has a patch sitting in -mm I believe which would make this problem
>>> even worse, by doing even less highmem scanning in response to lowmem
>>> allocations).
>> My patch should not make any difference here, since
>> balance_pgdat() already scans the zones from high to
>> low and sets an end_zone variable that determines the
>> highest zone to scan.
>>
>> All my patch does is make sure that we do not try to
>> reclaim excessive amounts of dma or low memory when
>> a higher zone is full.
> 
> Sorry, yeah I had it the wrong way around. Your patch would not
> increase the probability of this problem.
> 
> We could have some logic in there to scan highmem when buffer
> heads are over limit. But that really kind of sucks in that it introduces
> some arbitrary point where reclaim behaviour completely changes...
> Adding a shrinker for buffer heads is the "logical" approach 

Christoph Lameter's slab defragmenting patch set does
this.  One reason Andrew has not merged that code yet
is a lack of reviewers, so I am going through it with
a fine comb and hope to have the patches reviewed by
the end of today.

Lets get this bug fixed the right way.

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-17 21:11               ` Andrew Morton
@ 2007-09-19 22:12                 ` Rik van Riel
  2007-09-19 22:45                   ` Andrew Morton
  0 siblings, 1 reply; 24+ messages in thread
From: Rik van Riel @ 2007-09-19 22:12 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Peter Zijlstra, Anton Altaparmakov, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith

Andrew Morton wrote:
> On Mon, 17 Sep 2007 16:46:47 -0400
> Rik van Riel <riel@redhat.com> wrote:

>> Is the slab defragmentation code in -mm or upstream already
>> or can I find it on the mailing list?
> 
> Is on lkml and linux-mm: http://lkml.org/lkml/2007/8/31/329

> I think the whole approach is reasonable.  It's mainly a matter of going
> through it all with a toothcomb 

I've spent the last two days combing through the patches.

Except for the one doubt I had (resolved in email), and
one function name comment (on patch 18/26) the code looks
good to me.

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-19 22:12                 ` Rik van Riel
@ 2007-09-19 22:45                   ` Andrew Morton
  2007-09-19 22:50                     ` Rik van Riel
  2007-09-19 23:29                     ` Christoph Lameter
  0 siblings, 2 replies; 24+ messages in thread
From: Andrew Morton @ 2007-09-19 22:45 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Peter Zijlstra, Anton Altaparmakov, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith, Christoph Lameter

On Wed, 19 Sep 2007 18:12:38 -0400
Rik van Riel <riel@redhat.com> wrote:

> Andrew Morton wrote:
> > On Mon, 17 Sep 2007 16:46:47 -0400
> > Rik van Riel <riel@redhat.com> wrote:
> 
> >> Is the slab defragmentation code in -mm or upstream already
> >> or can I find it on the mailing list?
> > 
> > Is on lkml and linux-mm: http://lkml.org/lkml/2007/8/31/329
> 
> > I think the whole approach is reasonable.  It's mainly a matter of going
> > through it all with a toothcomb 
> 
> I've spent the last two days combing through the patches.

Thanks.  So would it be appropriate for Christoph to add the
thus-far-undefined reviewed-by tag to the next version?

> Except for the one doubt I had (resolved in email), and
> one function name comment (on patch 18/26)

yeah.  Basic rule: if the reviewer had to ask a question then others will
later ask themselves the same question when reading the code.  So this is a
very good indicator that there is a missing code comment.

> the code looks
> good to me.

Cool.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-19 22:45                   ` Andrew Morton
@ 2007-09-19 22:50                     ` Rik van Riel
  2007-09-19 23:29                     ` Christoph Lameter
  1 sibling, 0 replies; 24+ messages in thread
From: Rik van Riel @ 2007-09-19 22:50 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Peter Zijlstra, Anton Altaparmakov, Linux Kernel Mailing List,
	Linux Memory Management List, marc.smith, Christoph Lameter

Andrew Morton wrote:
> On Wed, 19 Sep 2007 18:12:38 -0400
> Rik van Riel <riel@redhat.com> wrote:
> 
>> Andrew Morton wrote:
>>> On Mon, 17 Sep 2007 16:46:47 -0400
>>> Rik van Riel <riel@redhat.com> wrote:
>>>> Is the slab defragmentation code in -mm or upstream already
>>>> or can I find it on the mailing list?
>>> Is on lkml and linux-mm: http://lkml.org/lkml/2007/8/31/329
>>> I think the whole approach is reasonable.  It's mainly a matter of going
>>> through it all with a toothcomb 
>> I've spent the last two days combing through the patches.
> 
> Thanks.  So would it be appropriate for Christoph to add the
> thus-far-undefined reviewed-by tag to the next version?

Absolutely.  Some of the code was not obvious at first,
but the more I looked at it the better it looked - that's
always a good sign, since most of the code seems to be
the other way around :)

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-19 22:45                   ` Andrew Morton
  2007-09-19 22:50                     ` Rik van Riel
@ 2007-09-19 23:29                     ` Christoph Lameter
  2007-09-19 23:37                       ` Andrew Morton
  1 sibling, 1 reply; 24+ messages in thread
From: Christoph Lameter @ 2007-09-19 23:29 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Peter Zijlstra, Anton Altaparmakov,
	Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

On Wed, 19 Sep 2007, Andrew Morton wrote:

> Cool.

Should I rediff on top of rc6-mm1 for submission? When will you be able to 
take it?
 


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-19 23:29                     ` Christoph Lameter
@ 2007-09-19 23:37                       ` Andrew Morton
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Morton @ 2007-09-19 23:37 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Rik van Riel, Peter Zijlstra, Anton Altaparmakov,
	Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

On Wed, 19 Sep 2007 16:29:02 -0700 (PDT)
Christoph Lameter <clameter@sgi.com> wrote:

> On Wed, 19 Sep 2007, Andrew Morton wrote:
> 
> > Cool.
> 
> Should I rediff on top of rc6-mm1 for submission? When will you be able to 
> take it?
>  

erm, I spose we should be concentrating on stabilising the current pile of
crud for 2.6.24 - it sure needs it.

Sometime around 2.6.24-rc1 would suit.


* Re: VM/VFS bug with large amount of memory and file systems?
  2007-09-15  7:27 VM/VFS bug with large amount of memory and file systems? Anton Altaparmakov
  2007-09-15 10:08 ` Peter Zijlstra
@ 2007-10-03 10:51 ` Pavel Machek
  1 sibling, 0 replies; 24+ messages in thread
From: Pavel Machek @ 2007-10-03 10:51 UTC (permalink / raw)
  To: Anton Altaparmakov
  Cc: Linux Kernel Mailing List, Linux Memory Management List,
	marc.smith

On Sat 2007-09-15 08:27:23, Anton Altaparmakov wrote:
> Hi,
> 
> Mark Smith reported an OOM condition when he copies a
> large (46GiB) file from an NTFS partition (using the
> stock kernel driver) to /dev/null (or to a file on
> ext3, same result).
> 
> The machine this runs on has an i386 kernel with 12GiB
> RAM (yes, this is not a typo, it is 12GiB!).

Yes, that's known broken. 32-bit machines with too much memory just do
not work properly.
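
A rough back-of-envelope sketch of the lowmem pressure involved, assuming
the classic 3G/1G split with an 896 MiB ZONE_NORMAL ceiling and a 32-byte
struct page (both figures are illustrative, not exact for any particular
kernel build):

```python
# Illustrative arithmetic: why 12 GiB strains an i386 kernel.
# Assumed figures: 896 MiB directly-mapped lowmem, 32-byte struct page.

GiB = 1 << 30
MiB = 1 << 20
PAGE_SIZE = 4096

ram = 12 * GiB
lowmem = 896 * MiB                  # directly-mapped kernel memory (ZONE_NORMAL)

pages = ram // PAGE_SIZE            # struct pages needed to describe all RAM
mem_map = pages * 32                # the mem_map array must live in lowmem

print(pages)                        # 3145728 pages
print(mem_map // MiB)               # 96 MiB of lowmem consumed by mem_map alone
```

With buffer_heads and other per-page kernel metadata also confined to
lowmem, ZONE_NORMAL can be exhausted long before the bulk of the 12 GiB
of highmem is ever used.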

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-09-15  7:27 VM/VFS bug with large amount of memory and file systems? Anton Altaparmakov
2007-09-15 10:08 ` Peter Zijlstra
2007-09-15 10:50   ` Anton Altaparmakov
2007-09-15 11:19     ` Peter Zijlstra
2007-09-16  7:22       ` Kyle Moffett
2007-09-16 16:16         ` Peter Zijlstra
2007-09-15 10:52   ` Andrew Morton
2007-09-17 14:04     ` Anton Altaparmakov
2007-09-17 14:09       ` Anton Altaparmakov
2007-09-16 22:28         ` Nick Piggin
2007-09-17 17:04           ` Rik van Riel
2007-09-17 17:12             ` Nick Piggin
2007-09-18 14:41               ` Rik van Riel
2007-09-17 14:32       ` Peter Zijlstra
2007-09-17 17:11         ` Rik van Riel
2007-09-17 20:15           ` Andrew Morton
2007-09-17 20:46             ` Rik van Riel
2007-09-17 21:11               ` Andrew Morton
2007-09-19 22:12                 ` Rik van Riel
2007-09-19 22:45                   ` Andrew Morton
2007-09-19 22:50                     ` Rik van Riel
2007-09-19 23:29                     ` Christoph Lameter
2007-09-19 23:37                       ` Andrew Morton
2007-10-03 10:51 ` Pavel Machek
