public inbox for kexec@lists.infradead.org
* makedumpfile mmap() benchmark
  2013-02-14 10:11 [PATCH 00/13] kdump, vmcore: support mmap() on /proc/vmcore HATAYAMA Daisuke
@ 2013-03-27  5:51 ` Jingbai Ma
  2013-03-27  6:23   ` HATAYAMA Daisuke
  0 siblings, 1 reply; 5+ messages in thread
From: Jingbai Ma @ 2013-03-27  5:51 UTC (permalink / raw)
  To: HATAYAMA Daisuke, vgoyal, ebiederm, cpw, kumagai-atsushi,
	Mitchell, Lisa (MCLinux in Fort Collins), akpm
  Cc: kexec, linux-kernel, jingbai.ma

Hi,

I have tested the makedumpfile mmap patch on a machine with 2TB memory;
here are the testing results:
Test environment:
Machine: HP ProLiant DL980 G7 with 2TB RAM.
CPU: Intel(R) Xeon(R) CPU E7- 2860  @ 2.27GHz (8 sockets, 10 cores)
(Only 1 CPU was enabled in the 2nd kernel)
Kernel: 3.9.0-rc3+ with mmap kernel patch v3
vmcore size: 2.0TB
Dump file size: 3.6GB
makedumpfile mmap branch with parameters: -c --message-level 23 -d 31 
--map-size <map-size>
All measured times are taken from the debug messages of makedumpfile.

As a comparison, I have also tested with the original kernel and the
original makedumpfile 1.5.1 and 1.5.3.
I added all [Excluding unnecessary pages] and [Excluding free pages]
times together as "Filter pages", and [Copying data] as "Copy data" here.

makedumpfile  Kernel                 map-size (KB)  Filter pages (s)  Copy data (s)  Total (s)
1.5.1         3.7.0-0.36.el7.x86_64  N/A                   940.28         1269.25      2209.53
1.5.3         3.7.0-0.36.el7.x86_64  N/A                   380.09          992.77      1372.86
1.5.3         v3.9-rc3               N/A                   197.77          892.27      1090.04
1.5.3+mmap    v3.9-rc3+mmap          0                     164.87          606.06       770.93
1.5.3+mmap    v3.9-rc3+mmap          4                      88.62          576.07       664.69
1.5.3+mmap    v3.9-rc3+mmap          1024                   83.66          477.23       560.89
1.5.3+mmap    v3.9-rc3+mmap          2048                   83.44          477.21       560.65
1.5.3+mmap    v3.9-rc3+mmap          10240                  83.84          476.56       560.40


Thanks,
Jingbai Ma



* Re: makedumpfile mmap() benchmark
  2013-03-27  5:51 ` makedumpfile mmap() benchmark Jingbai Ma
@ 2013-03-27  6:23   ` HATAYAMA Daisuke
  2013-03-27  6:35     ` Jingbai Ma
  0 siblings, 1 reply; 5+ messages in thread
From: HATAYAMA Daisuke @ 2013-03-27  6:23 UTC (permalink / raw)
  To: jingbai.ma
  Cc: kexec, linux-kernel, lisa.mitchell, kumagai-atsushi, ebiederm,
	akpm, cpw, vgoyal

From: Jingbai Ma <jingbai.ma@hp.com>
Subject: makedumpfile mmap() benchmark
Date: Wed, 27 Mar 2013 13:51:37 +0800

> Hi,
> 
> I have tested the makedumpfile mmap patch on a machine with 2TB
> memory; here are the testing results:

Thanks for your benchmark. It's very helpful to see the benchmark on
different environments.

> Test environment:
> Machine: HP ProLiant DL980 G7 with 2TB RAM.
> CPU: Intel(R) Xeon(R) CPU E7- 2860  @ 2.27GHz (8 sockets, 10 cores)
> (Only 1 CPU was enabled in the 2nd kernel)
> Kernel: 3.9.0-rc3+ with mmap kernel patch v3
> vmcore size: 2.0TB
> Dump file size: 3.6GB
> makedumpfile mmap branch with parameters: -c --message-level 23 -d 31
> --map-size <map-size>

To reduce the benchmark time, I recommend LZO or snappy compression
rather than zlib. zlib is used when the -c option is specified, and it's
too slow for crash dump use.

To build makedumpfile with support for each compression format, pass
USELZO=on or USESNAPPY=on to make after installing the necessary libraries.
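
For example (just a sketch of what I mean, assuming the lzo2 development
library is installed; -l selects LZO at run time, and USESNAPPY=on with -p
would be the snappy equivalent):

  make USELZO=on
  ./makedumpfile -l --message-level 23 -d 31 --map-size 4096 /proc/vmcore /var/crash/vmcore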

> All measured times are taken from the debug messages of makedumpfile.
> 
> As a comparison, I have also tested with the original kernel and the
> original makedumpfile 1.5.1 and 1.5.3.
> I added all [Excluding unnecessary pages] and [Excluding free pages]
> times together as "Filter pages", and [Copying data] as "Copy data"
> here.
> 
> makedumpfile Kernel map-size (KB) Filter pages (s) Copy data (s)
> Total (s)
> 1.5.1	 3.7.0-0.36.el7.x86_64	N/A	940.28	1269.25	2209.53
> 1.5.3	 3.7.0-0.36.el7.x86_64	N/A	380.09	992.77	1372.86
> 1.5.3	v3.9-rc3	N/A	197.77	892.27	1090.04
> 1.5.3+mmap	v3.9-rc3+mmap	0	164.87	606.06	770.93
> 1.5.3+mmap	v3.9-rc3+mmap	4	88.62	576.07	664.69
> 1.5.3+mmap	v3.9-rc3+mmap	1024	83.66	477.23	560.89
> 1.5.3+mmap	v3.9-rc3+mmap	2048	83.44	477.21	560.65
> 1.5.3+mmap	v3.9-rc3+mmap	10240	83.84	476.56	560.4

Did you calculate "Filter pages" by adding the two [Excluding unnecessary
pages] lines together? The first of the two lines is displayed by
get_num_dumpable_cyclic() during the calculation of the total number
of dumpable pages, which is later used to print the progress of writing
pages as a percentage.

For example, here is a log where the number of cycles is 3:

mem_map (16399)
  mem_map    : ffffea0801e00000
  pfn_start  : 20078000
  pfn_end    : 20080000
read /proc/vmcore with mmap()
STEP [Excluding unnecessary pages] : 13.703842 seconds <-- this part is by get_num_dumpable_cyclic()
STEP [Excluding unnecessary pages] : 13.842656 seconds
STEP [Excluding unnecessary pages] : 6.857910 seconds
STEP [Excluding unnecessary pages] : 13.554281 seconds <-- this part is by the main filtering processing.
STEP [Excluding unnecessary pages] : 14.103593 seconds
STEP [Excluding unnecessary pages] : 7.114239 seconds
STEP [Copying data               ] : 138.442116 seconds
Writing erase info...
offset_eraseinfo: 1f4680e40, size_eraseinfo: 0

Original pages  : 0x000000001ffc28a4
<cut>

So get_num_dumpable_cyclic() actually does a filtering operation, but its
time should not be included here.

If so, I guess each measured filtering time would be about 42 seconds
(roughly half of the ~84 seconds that sums both passes), right?
Then it's almost the same as the result I posted today: 35 seconds.

Thanks.
HATAYAMA, Daisuke




* Re: makedumpfile mmap() benchmark
  2013-03-27  6:23   ` HATAYAMA Daisuke
@ 2013-03-27  6:35     ` Jingbai Ma
  0 siblings, 0 replies; 5+ messages in thread
From: Jingbai Ma @ 2013-03-27  6:35 UTC (permalink / raw)
  To: HATAYAMA Daisuke
  Cc: kexec, linux-kernel, lisa.mitchell, kumagai-atsushi, jingbai.ma,
	akpm, cpw, vgoyal, ebiederm

On 03/27/2013 02:23 PM, HATAYAMA Daisuke wrote:
> From: Jingbai Ma<jingbai.ma@hp.com>
> Subject: makedumpfile mmap() benchmark
> Date: Wed, 27 Mar 2013 13:51:37 +0800
>
>> Hi,
>>
>> I have tested the makedumpfile mmap patch on a machine with 2TB
>> memory; here are the testing results:
>
> Thanks for your benchmark. It's very helpful to see the benchmark on
> different environments.

Thanks for your patch; it's a great performance improvement, very
impressive!

>
>> Test environment:
>> Machine: HP ProLiant DL980 G7 with 2TB RAM.
>> CPU: Intel(R) Xeon(R) CPU E7- 2860  @ 2.27GHz (8 sockets, 10 cores)
>> (Only 1 CPU was enabled in the 2nd kernel)
>> Kernel: 3.9.0-rc3+ with mmap kernel patch v3
>> vmcore size: 2.0TB
>> Dump file size: 3.6GB
>> makedumpfile mmap branch with parameters: -c --message-level 23 -d 31
>> --map-size<map-size>
>
> To reduce the benchmark time, I recommend LZO or snappy compressions
> rather than zlib. zlib is used when -c option is specified, and it's
> too slow for use of crash dump.

That's a very helpful suggestion; I will try it again with the LZO/snappy
libraries.

>
> To build makedumpfile with support for each compression format, pass
> USELZO=on or USESNAPPY=on to make after installing the necessary libraries.
>
>> All measured times are taken from the debug messages of makedumpfile.
>>
>> As a comparison, I have also tested with the original kernel and the
>> original makedumpfile 1.5.1 and 1.5.3.
>> I added all [Excluding unnecessary pages] and [Excluding free pages]
>> times together as "Filter pages", and [Copying data] as "Copy data"
>> here.
>>
>> makedumpfile Kernel map-size (KB) Filter pages (s) Copy data (s)
>> Total (s)
>> 1.5.1	 3.7.0-0.36.el7.x86_64	N/A	940.28	1269.25	2209.53
>> 1.5.3	 3.7.0-0.36.el7.x86_64	N/A	380.09	992.77	1372.86
>> 1.5.3	v3.9-rc3	N/A	197.77	892.27	1090.04
>> 1.5.3+mmap	v3.9-rc3+mmap	0	164.87	606.06	770.93
>> 1.5.3+mmap	v3.9-rc3+mmap	4	88.62	576.07	664.69
>> 1.5.3+mmap	v3.9-rc3+mmap	1024	83.66	477.23	560.89
>> 1.5.3+mmap	v3.9-rc3+mmap	2048	83.44	477.21	560.65
>> 1.5.3+mmap	v3.9-rc3+mmap	10240	83.84	476.56	560.4
>
> Did you calculate "Filter pages" by adding the two [Excluding unnecessary
> pages] lines together? The first of the two lines is displayed by
> get_num_dumpable_cyclic() during the calculation of the total number
> of dumpable pages, which is later used to print the progress of writing
> pages as a percentage.
>
> For example, here is a log where the number of cycles is 3:
>
> mem_map (16399)
>    mem_map    : ffffea0801e00000
>    pfn_start  : 20078000
>    pfn_end    : 20080000
> read /proc/vmcore with mmap()
> STEP [Excluding unnecessary pages] : 13.703842 seconds<-- this part is by get_num_dumpable_cyclic()
> STEP [Excluding unnecessary pages] : 13.842656 seconds
> STEP [Excluding unnecessary pages] : 6.857910 seconds
> STEP [Excluding unnecessary pages] : 13.554281 seconds<-- this part is by the main filtering processing.
> STEP [Excluding unnecessary pages] : 14.103593 seconds
> STEP [Excluding unnecessary pages] : 7.114239 seconds
> STEP [Copying data               ] : 138.442116 seconds
> Writing erase info...
> offset_eraseinfo: 1f4680e40, size_eraseinfo: 0
>
> Original pages  : 0x000000001ffc28a4
> <cut>
>
> So get_num_dumpable_cyclic() actually does a filtering operation, but its
> time should not be included here.
>
> If so, I guess each measured filtering time would be about 42 seconds
> (roughly half of the ~84 seconds that sums both passes), right?
> Then it's almost the same as the result I posted today: 35 seconds.

Yes, I added them together; the following is one dump message log:
<Log>
makedumpfile -c --message-level 23 -d 31 --map-size 10240 /proc/vmcore /sysroot/var/crash/vmcore_10240

cyclic buffer size has been changed: 77661798 => 77661184
Excluding unnecessary pages        : [100 %] STEP [Excluding unnecessary pages] : 24.777717 seconds
Excluding unnecessary pages        : [100 %] STEP [Excluding unnecessary pages] : 17.291935 seconds
Excluding unnecessary pages        : [100 %] STEP [Excluding unnecessary pages] : 24.498559 seconds
Excluding unnecessary pages        : [100 %] STEP [Excluding unnecessary pages] : 17.278414 seconds
Copying data                       : [100 %] STEP [Copying data               ] : 476.563428 seconds


Original pages  : 0x000000001ffe874d
   Excluded pages   : 0x000000001f79429e
     Pages filled with zero  : 0x00000000002b4c9c
     Cache pages             : 0x00000000000493bc
     Cache pages + private   : 0x00000000000011f3
     User process data pages : 0x0000000000005c55
     Free pages              : 0x000000001f48f3fe
     Hwpoison pages          : 0x0000000000000000
   Remaining pages  : 0x00000000008544af
   (The number of pages is reduced to 1%.)
Memory Hole     : 0x000000001c0178b3
--------------------------------------------------
Total pages     : 0x000000003c000000
</Log>

>
> Thanks.
> HATAYAMA, Daisuke
>


-- 
Thanks,
Jingbai Ma



* makedumpfile mmap() benchmark
@ 2013-05-03 19:10 Cliff Wickman
  2013-05-07  8:47 ` HATAYAMA Daisuke
  0 siblings, 1 reply; 5+ messages in thread
From: Cliff Wickman @ 2013-05-03 19:10 UTC (permalink / raw)
  To: kexec, linux-kernel
  Cc: lisa.mitchell, vgoyal, d.hatayama, kumagai-atsushi, ebiederm,
	jingbai.ma


> Jingbai Ma wrote on 27 Mar 2013:
> I have tested the makedumpfile mmap patch on a machine with 2TB memory;
> here are the testing results:
> Test environment:
> Machine: HP ProLiant DL980 G7 with 2TB RAM.
> CPU: Intel(R) Xeon(R) CPU E7- 2860  @ 2.27GHz (8 sockets, 10 cores)
> (Only 1 CPU was enabled in the 2nd kernel)
> Kernel: 3.9.0-rc3+ with mmap kernel patch v3
> vmcore size: 2.0TB
> Dump file size: 3.6GB
> makedumpfile mmap branch with parameters: -c --message-level 23 -d 31 
> --map-size <map-size>
> All measured times are taken from the debug messages of makedumpfile.
> 
> As a comparison, I have also tested with the original kernel and the
> original makedumpfile 1.5.1 and 1.5.3.
> I added all [Excluding unnecessary pages] and [Excluding free pages]
> times together as "Filter pages", and [Copying data] as "Copy data" here.
> 
> makedumpfile	Kernel	map-size (KB)	Filter pages (s)	Copy data (s)	Total (s)
> 1.5.1	 3.7.0-0.36.el7.x86_64	N/A	940.28	1269.25	2209.53
> 1.5.3	 3.7.0-0.36.el7.x86_64	N/A	380.09	992.77	1372.86
> 1.5.3	v3.9-rc3	N/A	197.77	892.27	1090.04
> 1.5.3+mmap	v3.9-rc3+mmap	0	164.87	606.06	770.93
> 1.5.3+mmap	v3.9-rc3+mmap	4	88.62	576.07	664.69
> 1.5.3+mmap	v3.9-rc3+mmap	1024	83.66	477.23	560.89
> 1.5.3+mmap	v3.9-rc3+mmap	2048	83.44	477.21	560.65
> 1.5.3+mmap	v3.9-rc3+mmap	10240	83.84	476.56	560.4

I have also tested the makedumpfile mmap patch on a machine with 2TB memory;
here are the results:
Test environment:
Machine: SGI UV1000 with 2TB RAM.
CPU: Intel(R) Xeon(R) CPU E7- 8837  @ 2.67GHz
(only 1 cpu was enabled in the 2nd kernel)
Kernel: 3.0.13 with mmap kernel patch v3 (I had to tweak the patch a bit)
vmcore size: 2.0TB
Dump file size: 3.6GB
makedumpfile mmap branch with parameters: -c --message-level 23 -d 31 
   --map-size <map-size>
All measured times are actual clock times.
All tests are noncyclic.   Crash kernel memory: crashkernel=512M

As did Jingbai Ma, I also tested with an unpatched kernel and
makedumpfile 1.5.1 and 1.5.3.  Those versions do 2 filtering scans (unnecessary
pages and free pages), which are added together here as the filter pages time.

makedumpfile  Kernel       map-size (KB)  Filter pages (s)  Copy data (s)  Total (s)
1.5.1         3.0.13       N/A                   671             511          1182
1.5.3         3.0.13       N/A                   294             535           829
1.5.3+mmap    3.0.13+mmap  0                      54             506           560
1.5.3+mmap    3.0.13+mmap  4096                   40             416           456
1.5.3+mmap    3.0.13+mmap  10240                  37             424           461

Using mmap for the copy data as well as for filtering pages did little:
1.5.3+mmap    3.0.13+mmap  4096                   37             414           451

My results are quite similar to Jingbai Ma's.
The mmap patch to the kernel greatly speeds the filtering of pages, so
we at SGI would very much like to see this patch in the 3.10 kernel.
  http://marc.info/?l=linux-kernel&m=136627770125345&w=2

What puzzles me is that the patch greatly speeds up reads of /proc/vmcore
(where map-size is 0) as well as providing the mmap ability.  I can now
seek/read page structures almost as fast as mmap'ing and copying them
(versus Jingbai Ma's results, where mmap almost doubled the speed of reads).
I have put counters in to verify, and we are doing several million
seek/reads vs. a few thousand mmaps.  Yet the performance is similar
(54 sec vs. 37 sec, above).  I can't rationalize that much improvement.
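
For reference, the two access patterns my counters distinguish look roughly
like the following.  This is only a simplified, self-contained sketch (not
makedumpfile code); the offset, window size and 64-byte "struct page"
stand-in are made up, and it assumes a kernel with the mmap patch applied:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define CHUNK (4UL << 20)                   /* 4 MB window, like --map-size 4096 */

int main(void)
{
	int fd = open("/proc/vmcore", O_RDONLY);
	if (fd < 0) {
		perror("open /proc/vmcore");
		return 1;
	}

	off_t off = 0x100000;               /* made-up, page-aligned offset   */
	char page[64];                      /* stand-in for one struct page   */

	/* Pattern 1: one small seek/read per page structure.  Scanning one
	 * 4 MB mem_map window this way takes tens of thousands of syscalls;
	 * over a 2 TB dump that adds up to millions of seek/reads.           */
	for (size_t i = 0; i < CHUNK / sizeof(page); i++)
		if (pread(fd, page, sizeof(page), off + (off_t)(i * sizeof(page))) < 0)
			break;

	/* Pattern 2: one mmap per window, then plain memory accesses.  The
	 * same scan needs only a few thousand mmap/munmap calls in total.    */
	char *map = mmap(NULL, CHUNK, PROT_READ, MAP_PRIVATE, fd, off);
	if (map != MAP_FAILED) {
		volatile char sink = 0;
		for (size_t i = 0; i < CHUNK; i += sizeof(page))
			sink ^= map[i];
		munmap(map, CHUNK);
		(void)sink;
	}

	close(fd);
	return 0;
}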

Thanks,
Cliff Wickman



* Re: makedumpfile mmap() benchmark
  2013-05-03 19:10 makedumpfile mmap() benchmark Cliff Wickman
@ 2013-05-07  8:47 ` HATAYAMA Daisuke
  0 siblings, 0 replies; 5+ messages in thread
From: HATAYAMA Daisuke @ 2013-05-07  8:47 UTC (permalink / raw)
  To: Cliff Wickman
  Cc: jingbai.ma, kexec, linux-kernel, lisa.mitchell, kumagai-atsushi,
	ebiederm, vgoyal

(2013/05/04 4:10), Cliff Wickman wrote:
> 
>> Jingbai Ma wrote on 27 Mar 2013:
>> I have tested the makedumpfile mmap patch on a machine with 2TB memory;
>> here are the testing results:
>> Test environment:
>> Machine: HP ProLiant DL980 G7 with 2TB RAM.
>> CPU: Intel(R) Xeon(R) CPU E7- 2860  @ 2.27GHz (8 sockets, 10 cores)
>> (Only 1 CPU was enabled in the 2nd kernel)
>> Kernel: 3.9.0-rc3+ with mmap kernel patch v3
>> vmcore size: 2.0TB
>> Dump file size: 3.6GB
>> makedumpfile mmap branch with parameters: -c --message-level 23 -d 31
>> --map-size <map-size>
>> All measured times are taken from the debug messages of makedumpfile.
>>
>> As a comparison, I have also tested with the original kernel and the
>> original makedumpfile 1.5.1 and 1.5.3.
>> I added all [Excluding unnecessary pages] and [Excluding free pages]
>> times together as "Filter pages", and [Copying data] as "Copy data" here.
>>
>> makedumpfile	Kernel	map-size (KB)	Filter pages (s)	Copy data (s)	Total (s)
>> 1.5.1	 3.7.0-0.36.el7.x86_64	N/A	940.28	1269.25	2209.53
>> 1.5.3	 3.7.0-0.36.el7.x86_64	N/A	380.09	992.77	1372.86
>> 1.5.3	v3.9-rc3	N/A	197.77	892.27	1090.04
>> 1.5.3+mmap	v3.9-rc3+mmap	0	164.87	606.06	770.93
>> 1.5.3+mmap	v3.9-rc3+mmap	4	88.62	576.07	664.69
>> 1.5.3+mmap	v3.9-rc3+mmap	1024	83.66	477.23	560.89
>> 1.5.3+mmap	v3.9-rc3+mmap	2048	83.44	477.21	560.65
>> 1.5.3+mmap	v3.9-rc3+mmap	10240	83.84	476.56	560.4
> 
> I have also tested the makedumpfile mmap patch on a machine with 2TB memory,
> here are the results:
> Test environment:
> Machine: SGI UV1000 with 2TB RAM.
> CPU: Intel(R) Xeon(R) CPU E7- 8837  @ 2.67GHz
> (only 1 cpu was enabled in the 2nd kernel)
> Kernel: 3.0.13 with mmap kernel patch v3 (I had to tweak the patch a bit)
> vmcore size: 2.0TB
> Dump file size: 3.6GB
> makedumpfile mmap branch with parameters: -c --message-level 23 -d 31
>     --map-size <map-size>
> All measured times are actual clock times.
> All tests are noncyclic.   Crash kernel memory: crashkernel=512M
> 
> As did Jingbai Ma, I also tested with an unpatched kernel and
> makedumpfile 1.5.1 and 1.5.3.  But they do 2 filtering scans: unnecessary
> pages and free pages; here added together as filter pages time.
> 
>                                        Filter    Copy
> makedumpfile Kernel	 map-size(KB) pages(s)	data(s) Total(s)
> 1.5.1	     3.0.13	   N/A	      671   	511    1182
> 1.5.3	     3.0.13	   N/A	      294       535     829
> 1.5.3+mmap   3.0.13+mmap     0	       54    	506   	560
> 1.5.3+mmap   3.0.13+mmap  4096	       40    	416	456
> 1.5.3+mmap   3.0.13+mmap 10240	       37	424	461
> 
> Using mmap for the copy data as well as for filtering pages did little:
> 1.5.3+mmap   3.0.13+mmap  4096	       37    	414	451
> 
> My results are quite similar to Jingbai Ma's.
> The mmap patch to the kernel greatly speeds the filtering of pages, so
> we at SGI would very much like to see this patch in the 3.10 kernel.
>    http://marc.info/?l=linux-kernel&m=136627770125345&w=2
> 
> What puzzles me is that the patch greatly speeds the read's of /proc/vmcore
> (where map-size is 0) as well as providing the mmap ability.  I can now
> seek/read page structures almost as fast as mmap'ing and copying them.
> (versus Jingbai Ma's results where mmap almost doubled the speed of reads)
> I have put counters in to verify, and we are doing several million
> seek/read's vs. a few thousand mmap's.  Yet the performance is similar
> (54sec vs. 37sec, above). I can't rationalize that much improvement.

The only change between 1.5.3+mmap and 1.5.3 that I guess might be
affecting the result is the one below.

commit ba1fd638ac024d01f70b5d7e16f0978cff978c22
Author: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Date:   Wed Feb 20 20:13:07 2013 +0900

    [PATCH] Clean up readmem() by removing its recursive call.

In addition to your and Ma's results, my own result was similar:
100 secs for read() and 70 secs for mmap() with a 4KB map. See:
https://lkml.org/lkml/2013/3/26/914

So I think:

- The performance degradation came not only from many ioremap/iounmap
calls but also from the way makedumpfile was implemented.

- The makedumpfile changes that account for the performance gain are the
following two (a minimal sketch of the caching idea follows after this list):
  - the 8-entry cache for readmem() implemented by Petr Tesarik, and
  - the above cleanup patch that removes the unnecessary recursive call in
readmem().

- Even with these changes alone, we can get enough performance gain.
Further, using mmap allows us to get performance close to that of
kernel-side processing; this might be unnecessary in practice, but it might
be meaningful for kdump's design, which uses user-space tools as part of the
framework.
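
For illustration, the caching idea is roughly the following.  This is only a
minimal sketch, not Petr Tesarik's actual patch; the function name, the
direct-mapped lookup and the 4 KB block size are assumptions made for the
example:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define CACHE_ENTRIES 8
#define CACHE_BLOCK   4096                   /* one page-sized block per slot */

struct cache_entry {
	uint64_t base;                       /* block-aligned offset in the dump */
	bool     valid;
	char     data[CACHE_BLOCK];
};

static struct cache_entry cache[CACHE_ENTRIES];

/* Read 'len' bytes at 'offset' of the dump file 'fd' through a small cache,
 * so that repeated nearby reads (e.g. walking struct page entries within one
 * mem_map chunk) become plain memcpy's instead of separate read syscalls.   */
bool cached_readmem(int fd, uint64_t offset, void *buf, size_t len)
{
	uint64_t base = offset & ~(uint64_t)(CACHE_BLOCK - 1);
	size_t   off  = (size_t)(offset - base);
	struct cache_entry *e = &cache[(base / CACHE_BLOCK) % CACHE_ENTRIES];

	if (len == 0 || off + len > CACHE_BLOCK)
		return false;                /* keep the sketch simple: no spanning */

	if (!e->valid || e->base != base) {  /* miss: refill this slot from the file */
		if (pread(fd, e->data, CACHE_BLOCK, (off_t)base) != CACHE_BLOCK)
			return false;
		e->base  = base;
		e->valid = true;
	}
	memcpy(buf, e->data + off, len);     /* hit path: no syscall at all */
	return true;
}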

-- 
Thanks.
HATAYAMA, Daisuke



