public inbox for linux-kernel@vger.kernel.org
From: "Martin J. Bligh" <Martin.Bligh@us.ibm.com>
To: davidm@hpl.hp.com, Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
Cc: Andi Kleen <ak@suse.de>, linux-kernel@vger.kernel.org
Subject: Re: [RFC] 4KB stack + irq stack for x86
Date: Thu, 06 Jun 2002 15:10:55 -0700	[thread overview]
Message-ID: <88670000.1023401455@flay> (raw)
In-Reply-To: <15615.53702.794957.958227@napali.hpl.hp.com>

> We don't do anything special.  I'm not sure what the fragmentation
> statistics look like on machines with 1+GB memory; it's something I
> have been wondering about and hoping to look into at some point (if
> someone has done that already, I'd love to see the results).  In
> practice, every ia64 linux distro as of today ships with 16KB page
> size, so you only get order-1 allocations for stacks.

I mailed out a patch that creates /proc/buddyinfo, which should give
you fragmentation stats very easily ... I'd be interested to know whether
it works on your machine. A slightly updated patch against 2.4.19-pre10 is below:

diff -urN virgin-2.4.19-pre10/fs/proc/proc_misc.c linux-2.4.19-pre10-buddyinfo/fs/proc/proc_misc.c
--- virgin-2.4.19-pre10/fs/proc/proc_misc.c	Wed Jun  5 16:32:15 2002
+++ linux-2.4.19-pre10-buddyinfo/fs/proc/proc_misc.c	Wed Jun  5 16:56:05 2002
@@ -213,6 +213,21 @@
 #undef K
 }
 
+extern int buddyinfo(char *buf, int node_id);
+
+int buddyinfo_read_proc(char *page, char **start, off_t off,
+		        int count, int *eof, void *data)
+{
+	int node_id;
+	int len = 0;
+
+	for (node_id = 0; node_id < numnodes; node_id++) {
+		len += buddyinfo(page+len, node_id);
+	}
+
+	return proc_calc_metrics(page, start, off, count, eof, len);
+}
+
 static int version_read_proc(char *page, char **start, off_t off,
 				 int count, int *eof, void *data)
 {
@@ -589,6 +604,8 @@
 		entry->proc_fops = &proc_kmsg_operations;
 	create_seq_entry("cpuinfo", 0, &proc_cpuinfo_operations);
 	create_seq_entry("slabinfo",S_IWUSR|S_IRUGO,&proc_slabinfo_operations);
+	create_proc_read_entry("buddyinfo", S_IWUSR | S_IRUGO, NULL,
+				       buddyinfo_read_proc, NULL);
 #ifdef CONFIG_MODULES
 	create_seq_entry("ksyms", 0, &proc_ksyms_operations);
 #endif
diff -urN virgin-2.4.19-pre10/mm/page_alloc.c linux-2.4.19-pre10-buddyinfo/mm/page_alloc.c
--- virgin-2.4.19-pre10/mm/page_alloc.c	Wed Jun  5 16:32:33 2002
+++ linux-2.4.19-pre10-buddyinfo/mm/page_alloc.c	Wed Jun  5 16:57:17 2002
@@ -853,3 +853,39 @@
 }
 
 __setup("memfrac=", setup_mem_frac);
+
+
+/* 
+ * This walks the freelist for each zone. Whilst this is slow, I'd rather 
+ * be slow here than slow down the fast path by keeping stats - mjbligh
+ */
+int buddyinfo(char *buf, int node_id)
+{
+	int zone_id, order, free, len = 0;
+	unsigned long flags;
+	zone_t *zone;
+	free_area_t * area;
+	struct list_head *head, *curr;
+	
+	for (zone_id = 0; zone_id < MAX_NR_ZONES; ++zone_id) {
+		zone = &(NODE_DATA(node_id)->node_zones[zone_id]);
+		if (zone->size == 0)
+			continue;
+		spin_lock_irqsave(&zone->lock, flags);
+		len += sprintf(buf+len, "Node %d, Zone %8s, ", 
+				node_id, zone->name);
+		for (order = 0; order < MAX_ORDER; ++order) {
+			area = zone->free_area + order;
+			head = &area->free_list;
+			free = 0;
+			for (curr = head->next; curr != head; curr = curr->next)
+				++free;
+			len += sprintf(buf+len, "%d ", free);
+		}
+		len += sprintf(buf+len, "\n");
+		spin_unlock_irqrestore(&zone->lock, flags);
+	}
+
+	return len;
+}
+

