* [patch 2.5.8] bounce/swap stats
@ 2002-04-12 1:21 Randy.Dunlap
From: Randy.Dunlap @ 2002-04-12 1:21 UTC (permalink / raw)
To: linux-kernel; +Cc: axboe, andrea
[-- Attachment #1: Type: TEXT/PLAIN, Size: 4290 bytes --]
Hi,
This patch adds statistics for all bounce I/O and for bounce swap I/O
to /proc/stat.
I've been testing bounce I/O and VM performance in 2.4.teens
with the highio patch and in 2.5.x.
Summary:
* 2.5.8-pre3 with highio runs to completion with an intense workload
* 2.5.8-pre3 with "nohighio" and same workload dies
* 2.5.8-pre3 with "nohighio" and less workload runs
[attachments contain /proc/stat for completed runs]
Here's the patch. Jens, please apply to 2.5.8-N.
--- ./fs/proc/proc_misc.c.org Thu Jan 3 09:16:31 2002
+++ ./fs/proc/proc_misc.c Tue Jan 8 16:12:56 2002
@@ -324,6 +324,12 @@
xtime.tv_sec - jif / HZ,
total_forks);
+ len += sprintf(page + len,
+ "bounce_io %u %u\n"
+ "bounce_swap_io %u %u\n",
+ kstat.bouncein, kstat.bounceout,
+ kstat.bounceswapin, kstat.bounceswapout);
+
return proc_calc_metrics(page, start, off, count, eof, len);
}
--- ./mm/highmem.c.org Thu Jan 3 09:16:31 2002
+++ ./mm/highmem.c Tue Jan 8 16:16:51 2002
@@ -21,6 +21,7 @@
#include <linux/mempool.h>
#include <linux/blkdev.h>
#include <asm/pgalloc.h>
+#include <linux/kernel_stat.h>
static mempool_t *page_pool, *isa_page_pool;
@@ -401,7 +401,10 @@
vfrom = kmap(from->bv_page) + from->bv_offset;
memcpy(vto, vfrom, to->bv_len);
kunmap(from->bv_page);
+ kstat.bounceout++;
}
+ else
+ kstat.bouncein++;
}
/*
--- ./include/linux/kernel_stat.h.org Thu Jan 3 09:28:04 2002
+++ ./include/linux/kernel_stat.h Tue Jan 8 16:10:20 2002
@@ -26,6 +26,8 @@
unsigned int dk_drive_wblk[DK_MAX_MAJOR][DK_MAX_DISK];
unsigned int pgpgin, pgpgout;
unsigned int pswpin, pswpout;
+ unsigned int bouncein, bounceout;
+ unsigned int bounceswapin, bounceswapout;
#if !defined(CONFIG_ARCH_S390)
unsigned int irqs[NR_CPUS][NR_IRQS];
#endif
--- ./mm/page_io.c.orig Tue Apr 9 14:54:02 2002
+++ ./mm/page_io.c Tue Apr 9 16:18:18 2002
@@ -10,11 +10,13 @@
* Always use brw_page, life becomes simpler. 12 May 1998 Eric Biederman
*/
+#include <linux/config.h>
#include <linux/mm.h>
#include <linux/kernel_stat.h>
#include <linux/swap.h>
#include <linux/locks.h>
#include <linux/swapctl.h>
+#include <linux/blkdev.h>
#include <asm/pgtable.h>
@@ -41,6 +43,7 @@
int block_size;
struct inode *swapf = 0;
struct block_device *bdev;
+ kdev_t kdev;
if (rw == READ) {
ClearPageUptodate(page);
@@ -54,6 +57,7 @@
zones[0] = offset;
zones_used = 1;
block_size = PAGE_SIZE;
+ kdev = swapf->i_rdev;
} else {
int i, j;
unsigned int block = offset
@@ -67,6 +71,19 @@
}
zones_used = i;
bdev = swapf->i_sb->s_bdev;
+ kdev = swapf->i_sb->s_dev;
+ }
+
+ {
+ request_queue_t *q = blk_get_queue(kdev); /* TBD: is kdev always correct here? */
+ zone_t *zone = page_zone(page);
+ if (q && (page - zone->zone_mem_map) + (zone->zone_start_paddr
+ >> PAGE_SHIFT) >= q->bounce_pfn) {
+ if (rw == WRITE)
+ kstat.bounceswapout++;
+ else
+ kstat.bounceswapin++;
+ }
}
/* block_size == PAGE_SIZE/zones_used */
I'll keep looking into the "kernel dies" problem(s) that I'm
seeing [using more tools], but in the meantime I have some data and
a patch for 2.5.8 covering bounce I/O and bounce swap statistics
that I would like to have integrated, so that both users and
developers can get more insight into how much bounce I/O is happening.
I'll generate the patch for 2.4.teens + highmem if anyone
is interested in it, or after highmem is merged into 2.4.
...it will be added to 2.4, right?
There is a second patch (attached) that prints the device
major:minor of devices that are being used for bounce I/O
[258-bounce-identify.patch]. This is a user helper, not
intended for kernel inclusion.
Some of the symptoms that I usually see with the most memory-intensive
workloads are:
. 'top' reports that all 8 processors are in system exec. state
at 98-99% level and 'top' display is only being updated every
few minutes (should be updated every 1 second)
. Magic SysRq does not work when all 8 CPUs are tied up in system
mode
. there is a looping script running (with 'sleep 1') that
prints the last 50 lines of 'dmesg', but it often doesn't print
for 10-20 minutes and then finally comes back to life
. I usually don't see a kernel death, just lack of response, or
my sessions to the test system die.
Comments?
Thanks,
~Randy
[-- Attachment #2: Type: TEXT/PLAIN, Size: 637 bytes --]
--- linux/mm/highmem.c.IDDEV Tue Apr 9 14:55:43 2002
+++ linux/mm/highmem.c Tue Apr 9 15:28:12 2002
@@ -347,6 +347,8 @@
return __bounce_end_io_read(bio, isa_page_pool);
}
+static int bmsg_count; /* for printk rate throttling */
+
void create_bounce(unsigned long pfn, int gfp, struct bio **bio_orig)
{
struct page *page;
@@ -449,6 +451,12 @@
bio->bi_private = *bio_orig;
*bio_orig = bio;
+
+ bmsg_count++;
+ if ((bmsg_count % 10000) == 1)
+ printk(KERN_INFO "bounce_io (%c) for %d:%d\n",
+ (rw & WRITE) ? 'W' : 'R',
+ major(bio->bi_dev), minor(bio->bi_dev));
}
#if CONFIG_DEBUG_HIGHMEM
[-- Attachment #3: Type: TEXT/PLAIN, Size: 1653 bytes --]
### ./bashmemrun.1.times
14.09user 312.70system 3:31:25elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (464630major+45minor)pagefaults 0swaps
### ./bashmemrun.2.times
27.12user 75.72system 3:51:24elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (828745major+43minor)pagefaults 0swaps
### ./bashmemrun.3.times
3.10user 39.28system 3:58:33elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (188143major+40minor)pagefaults 0swaps
### ./bashmemrun.4.times
8.00user 186.14system 3:37:15elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (351732major+40minor)pagefaults 0swaps
### ./bashmemrun.5.times
21.21user 956.31system 3:34:02elapsed 7%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (458937major+45minor)pagefaults 0swaps
### ./bashmemrun.6.times
15.09user 242.06system 3:40:08elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (434785major+40minor)pagefaults 0swaps
/proc/stat: (selected parts; complete files available)
================
BEFORE WORKLOAD
================
cpu 333988 1699 480191 3038026
page 332236 31303253
swap 833 2541
disk_io: (8,0):(2449983,50735,619956,2399248,62510882) (8,1):(7770,2265,44540,5505,101928)
ctxt 803075
btime 1018395021
processes 8990
bounce_io 0 0
bounce_swap_io 0 0
================
AFTER WORKLOAD
================
cpu 366303 2387 9979922 4968500
page 3492460 55266961
swap 628925 1623359
disk_io: (8,0):(4170592,224001,4903460,3946591,101602466) (8,1):(338242,51221,2087732,287021,8965304)
ctxt 2328850
btime 1018395021
processes 11729
bounce_io 0 0
bounce_swap_io 0 0
[-- Attachment #4: Type: TEXT/PLAIN, Size: 1641 bytes --]
### ./bashmemrun.1.times
18.12user 207.73system 9:39.92elapsed 38%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (476954major+46minor)pagefaults 0swaps
### ./bashmemrun.2.times
2.33user 15.45system 0:32.64elapsed 54%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (83679major+23minor)pagefaults 0swaps
### ./bashmemrun.3.times
24.10user 382.99system 34:47.04elapsed 19%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (523278major+43minor)pagefaults 0swaps
### ./bashmemrun.4.times
4.00user 117.27system 26:45.54elapsed 7%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (170572major+48minor)pagefaults 0swaps
### ./bashmemrun.5.times
10.78user 200.86system 28:12.76elapsed 12%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (303228major+50minor)pagefaults 0swaps
### ./bashmemrun.6.times
7.03user 151.76system 5:26.88elapsed 48%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (368442major+43minor)pagefaults 0swaps
/proc/stat: (selected parts; complete files available)
================
BEFORE WORKLOAD
================
cpu 673 38 1221 227444
page 28512 19497
swap 8 0
disk_io: (8,0):(1618,1334,26132,284,8178) (8,1):(3672,1419,30892,2253,30824)
ctxt 22471
btime 1018542989
processes 1610
bounce_io 6180 968
bounce_swap_io 0 0
================
AFTER WORKLOAD
================
cpu 53618 856 852548 997122
page 3761092 11439113
swap 603415 529132
disk_io: (8,0):(936404,308004,6818996,628400,21985122) (8,1):(39420,17407,708132,22013,910720)
ctxt 1673605
btime 1018542989
processes 4946
bounce_io 826235 2265641
bounce_swap_io 547196 235096
* Re: [patch 2.5.8] bounce/swap stats
2002-04-12 1:21 [patch 2.5.8] bounce/swap stats Randy.Dunlap
@ 2002-04-12 2:15 ` Marcelo Tosatti
2002-04-12 5:40 ` Jens Axboe
From: Marcelo Tosatti @ 2002-04-12 2:15 UTC (permalink / raw)
To: Randy.Dunlap; +Cc: linux-kernel, axboe, andrea
On Thu, 11 Apr 2002, Randy.Dunlap wrote:
> I'll generate the patch for 2.4.teens + highmem if anyone
> is interested in it, or after highmem is merged into 2.4.
> ...it will be added to 2.4, right?
highmem IO will be merged in 2.4.20pre1.
* Re: [patch 2.5.8] bounce/swap stats
2002-04-12 2:15 ` Marcelo Tosatti
@ 2002-04-12 5:40 ` Jens Axboe
From: Jens Axboe @ 2002-04-12 5:40 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Randy.Dunlap, linux-kernel, andrea
On Thu, Apr 11 2002, Marcelo Tosatti wrote:
> On Thu, 11 Apr 2002, Randy.Dunlap wrote:
>
> > I'll generate the patch for 2.4.teens + highmem if anyone
> > is interested in it, or after highmem is merged into 2.4.
> > ...it will be added to 2.4, right?
>
> highmem IO will be merged in 2.4.20pre1.
Thanks Marcelo, I'll personally polish a version for you once 2.4.19
final is out.
--
Jens Axboe