* [PATCH 0/4] process memory footprint info in proc/<pid>/[s|p]maps v2
[not found] <20070819075410.411207640@mail.ustc.edu.cn>
@ 2007-08-19 7:54 ` Fengguang Wu
2007-08-20 3:15 ` Ray Lee
[not found] ` <20070819075547.445659254@mail.ustc.edu.cn>
` (3 subsequent siblings)
4 siblings, 1 reply; 7+ messages in thread
From: Fengguang Wu @ 2007-08-19 7:54 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matt Mackall, John Berthels, linux-kernel
Andrew,
Inspired by Matt Mackall's pagemap patches and ideas, I worked up these
textual interfaces that achieve the same goals. The patches run OK
under different sized reads.
1) Add PSS into the existing /proc/<pid>/smaps:
[PATCH 1/4] maps: PSS(proportional set size) accounting in smaps
2) Create /proc/<pid>/pmaps for page granularity mmap footprints:
[PATCH 2/4] maps: address based vma walking
[PATCH 3/4] maps: introduce generic_maps_open()
[PATCH 4/4] maps: /proc/<pid>/pmaps interface - memory maps in granularity of pages
fs/proc/base.c | 7
fs/proc/internal.h | 1
fs/proc/task_mmu.c | 359 +++++++++++++++++++++++++++++---------
fs/seq_file.c | 1
include/linux/proc_fs.h | 7
mm/mempolicy.c | 2
6 files changed, 288 insertions(+), 89 deletions(-)
In this new version, offset and len are changed to hexadecimal numbers.
Thank you,
Fengguang
--
An example output:
# cat /proc/`pidof init`/pmaps
00400000-00409000 r-xp 00000000 08:01 1058732 /sbin/init
0 8 YRAU___ 1
00608000-00609000 rw-p 00008000 08:01 1058732 /sbin/init
8 1 Y_A_P__ 1
00609000-0062a000 rw-p 00609000 00:00 0 [heap]
609 5 Y_A_P__ 1
2b7d6e277000-2b7d6e294000 r-xp 00000000 08:01 1563879 /lib/ld-2.6.so
0 1 YRAU___ 63
1 4 YRAU___ 41
5 2 YRAU___ 55
7 1 YRAU___ 50
8 1 YRAU___ 55
9 1 YRAU___ 81
a 1 YRAU___ 57
b 2 YRAU___ 55
d 2 YRAU___ 81
f 1 YRAU___ 55
10 1 YRAU___ 43
13 1 YRAU___ 81
14 1 YRAU___ 42
15 1 YRAU___ 81
16 1 YRAU___ 56
17 1 YRAU___ 81
1a 1 YRAU___ 55
2b7d6e294000-2b7d6e297000 rw-p 2b7d6e294000 00:00 0
2b7d6e294 3 Y_A_P__ 1
2b7d6e493000-2b7d6e495000 rw-p 0001c000 08:01 1563879 /lib/ld-2.6.so
1c 2 Y_A_P__ 1
2b7d6e495000-2b7d6e4d0000 r-xp 00000000 08:01 1400920 /lib/libsepol.so.1
0 1 YRAU___ 5
1 2 YRAU___ 3
3 1 YRAU___ 5
4 1 YRAU___ 3
30 1 YRAU___ 3
2b7d6e4d0000-2b7d6e6d0000 ---p 0003b000 08:01 1400920 /lib/libsepol.so.1
2b7d6e6d0000-2b7d6e6d1000 rw-p 0003b000 08:01 1400920 /lib/libsepol.so.1
3b 1 Y_A_P__ 1
2b7d6e6d1000-2b7d6e6db000 rw-p 2b7d6e6d1000 00:00 0
2b7d6e6db000-2b7d6e6f1000 r-xp 00000000 08:01 1400922 /lib/libselinux.so.1
0 1 YRAU___ 9
1 3 YRAU___ 5
4 1 YRAU___ 4
a 1 YRAU___ 4
c 1 YRAU___ 3
f 1 YRAU___ 4
10 1 YRAU___ 3
12 2 YRAU___ 4
2b7d6e6f1000-2b7d6e8f0000 ---p 00016000 08:01 1400922 /lib/libselinux.so.1
2b7d6e8f0000-2b7d6e8f2000 rw-p 00015000 08:01 1400922 /lib/libselinux.so.1
15 2 Y_A_P__ 1
2b7d6e8f2000-2b7d6e8f3000 rw-p 2b7d6e8f2000 00:00 0
2b7d6e8f2 1 Y_A_P__ 1
2b7d6e8f3000-2b7d6ea40000 r-xp 00000000 08:01 1564031 /lib/libc-2.6.so
0 2 YRAU___ 81
2 1 YRAU___ 80
3 1 YRAU___ 81
4 1 YRAU___ 72
5 1 YRAU___ 77
6 1 YRAU___ 79
7 1 YRAU___ 73
8 1 YRAU___ 79
9 1 YRAU___ 78
a 1 YRAU___ 77
b 1 YRAU___ 72
c 1 YRAU___ 75
d 1 YRAU___ 81
e 1 YRAU___ 72
f 1 YRAU___ 78
10 1 YRAU___ 81
11 1 YRAU___ 78
12 1 YRAU___ 80
13 1 YRAU___ 78
14 2 YRAU___ 81
16 1 YRAU___ 49
17 6 YRAU___ 41
1d 1 YRAU___ 80
2a 1 YRAU___ 44
2b 1 YRAU___ 69
2c 1 YRAU___ 28
31 1 YRAU___ 79
32 1 YRAU___ 49
33 1 YRAU___ 73
34 1 YRAU___ 57
35 1 YRAU___ 67
42 1 YRAU___ 77
43 3 YRAU___ 78
46 1 YRAU___ 77
47 1 YRAU___ 70
48 1 YRAU___ 15
4d 1 YRAU___ 78
5f 1 YRAU___ 78
60 1 YRAU___ 75
61 1 YRAU___ 72
62 1 YRAU___ 67
63 1 YRAU___ 59
68 1 YRAU___ 63
69 1 YRAU___ 61
6a 1 YRAU___ 63
6b 1 YRAU___ 75
6c 1 YRAU___ 76
6d 1 YRAU___ 81
6e 1 YRAU___ 77
6f 1 YRAU___ 79
70 1 YRAU___ 44
71 1 YRAU___ 79
72 1 YRAU___ 78
73 1 YRAU___ 79
74 1 YRAU___ 46
75 1 YRAU___ 78
76 1 YRAU___ 60
78 1 YRAU___ 79
79 1 YRAU___ 81
7a 1 YRAU___ 80
7e 1 YRAU___ 41
8a 1 YRAU___ 36
8b 1 YRAU___ 72
8c 1 YRAU___ 28
8d 1 YRAU___ 21
91 2 YRAU___ 19
99 1 YRAU___ 62
9a 1 YRAU___ 76
9b 1 YRAU___ 72
c4 1 YRAU___ 80
c5 1 YRAU___ 82
c6 1 YRAU___ 81
cb 1 YRAU___ 42
cc 1 YRAU___ 79
cd 1 YRAU___ 26
cf 1 YRAU___ 12
d0 1 YRAU___ 77
d1 1 YRAU___ 38
d2 1 YRAU___ 45
d3 1 YRAU___ 66
d4 1 YRAU___ 71
e0 1 YRAU___ 52
106 1 YRAU___ 33
107 1 YRAU___ 14
108 1 YRAU___ 63
109 1 YRAU___ 57
10b 1 YRAU___ 70
10c 1 YRAU___ 63
118 1 YRAU___ 65
119 1 YRAU___ 78
11a 1 YRAU___ 69
11b 1 YRAU___ 68
11e 2 YRAU___ 62
120 1 YRAU___ 74
121 1 YRAU___ 42
125 1 YRAU___ 17
2b7d6ea40000-2b7d6ec40000 ---p 0014d000 08:01 1564031 /lib/libc-2.6.so
2b7d6ec40000-2b7d6ec43000 r--p 0014d000 08:01 1564031 /lib/libc-2.6.so
14d 3 Y_A_P__ 1
2b7d6ec43000-2b7d6ec45000 rw-p 00150000 08:01 1564031 /lib/libc-2.6.so
150 2 Y_A_P__ 1
2b7d6ec45000-2b7d6ec4b000 rw-p 2b7d6ec45000 00:00 0
2b7d6ec45 6 Y_A_P__ 1
2b7d6ec4b000-2b7d6ec4d000 r-xp 00000000 08:01 1564037 /lib/libdl-2.6.so
0 1 YRAU___ 51
1 1 YRAU___ 39
2b7d6ec4d000-2b7d6ee4d000 ---p 00002000 08:01 1564037 /lib/libdl-2.6.so
2b7d6ee4d000-2b7d6ee4f000 rw-p 00002000 08:01 1564037 /lib/libdl-2.6.so
2 2 Y_A_P__ 1
2b7d6ee4f000-2b7d6ee50000 rw-p 2b7d6ee4f000 00:00 0
2b7d6ee4f 1 Y_A_P__ 1
7fff3c81d000-7fff3c832000 rw-p 7ffffffea000 00:00 0 [stack]
7fffffffa 1 Y_A_P__ 1
7fffffffc 2 Y_A_P__ 1
7fffffffe 1 YRA_P__ 1
7fff3c9fe000-7fff3ca00000 r-xp 7fff3c9fe000 00:00 0 [vdso]
0 1 YR_____ 55
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH 1/4] maps: PSS(proportional set size) accounting in smaps
[not found] ` <20070819075547.445659254@mail.ustc.edu.cn>
@ 2007-08-19 7:54 ` Fengguang Wu
0 siblings, 0 replies; 7+ messages in thread
From: Fengguang Wu @ 2007-08-19 7:54 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matt Mackall, John Berthels, linux-kernel
[-- Attachment #1: smaps-pss.patch --]
[-- Type: text/plain, Size: 3497 bytes --]
The "proportional set size" (PSS) of a process is the count of pages it has in
memory, where each page is divided by the number of processes sharing it. So if
a process has 1000 pages all to itself, and 1000 shared with one other process,
its PSS will be 1500.
- lwn.net: "ELC: How much memory are applications really using?"
The PSS proposed by Matt Mackall is a very nice metric for measuring a process's
memory footprint. So collect it and export it via /proc/<pid>/smaps.
Matt Mackall's pagemap/kpagemap and John Berthels's exmap can also do the job.
They are comprehensive tools. But for PSS alone, let's do it the simple way.
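As a quick sanity check of the definition, here is a toy calculation (plain
Python, not part of the patch; the 4096-byte page size is an assumption) that
reproduces the 1000-private/1000-shared example quoted above:

```python
def pss(pages):
    """pages: one mapcount per resident page.

    PSS divides each page's size by its number of sharers."""
    page_size = 4096  # assumed page size
    return sum(page_size / mapcount for mapcount in pages)

# 1000 private pages (mapcount 1) plus 1000 pages shared with one
# other process (mapcount 2) -> 1500 pages' worth of memory.
print(pss([1] * 1000 + [2] * 1000) / 4096)
```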
Cc: John Berthels <jjberthels@gmail.com>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
fs/proc/task_mmu.c | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)
--- linux-2.6.23-rc2-mm2.orig/fs/proc/task_mmu.c
+++ linux-2.6.23-rc2-mm2/fs/proc/task_mmu.c
@@ -324,6 +324,27 @@ struct mem_size_stats
unsigned long private_clean;
unsigned long private_dirty;
unsigned long referenced;
+
+ /*
+ * Proportional Set Size(PSS): my share of RSS.
+ *
+ * PSS of a process is the count of pages it has in memory, where each
+ * page is divided by the number of processes sharing it. So if a
+ * process has 1000 pages all to itself, and 1000 shared with one other
+ * process, its PSS will be 1500. - Matt Mackall, lwn.net
+ */
+ u64 pss;
+ /*
+ * To keep (accumulated) division errors low, we adopt 64bit pss and
+ * use some low bits for division errors. So (pss >> PSS_ERROR_BITS)
+ * would be the real byte count.
+ *
+ * A shift of 12 before division means(assuming 4K page size):
+ * - 1M 3-user-pages add up to 8KB errors;
+ * - supports mapcount up to 2^24, or 16M;
+ * - supports PSS up to 2^52 bytes, or 4PB.
+ */
+#define PSS_ERROR_BITS 12
};
struct smaps_arg
@@ -341,6 +362,7 @@ static int smaps_pte_range(pmd_t *pmd, u
pte_t *pte, ptent;
spinlock_t *ptl;
struct page *page;
+ int mapcount;
pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
for (; addr != end; pte++, addr += PAGE_SIZE) {
@@ -357,16 +379,19 @@ static int smaps_pte_range(pmd_t *pmd, u
/* Accumulate the size in pages that have been accessed. */
if (pte_young(ptent) || PageReferenced(page))
mss->referenced += PAGE_SIZE;
- if (page_mapcount(page) >= 2) {
+ mapcount = page_mapcount(page);
+ if (mapcount >= 2) {
if (pte_dirty(ptent))
mss->shared_dirty += PAGE_SIZE;
else
mss->shared_clean += PAGE_SIZE;
+ mss->pss += (PAGE_SIZE << PSS_ERROR_BITS) / mapcount;
} else {
if (pte_dirty(ptent))
mss->private_dirty += PAGE_SIZE;
else
mss->private_clean += PAGE_SIZE;
+ mss->pss += (PAGE_SIZE << PSS_ERROR_BITS);
}
}
pte_unmap_unlock(pte - 1, ptl);
@@ -395,6 +420,7 @@ static int show_smap(struct seq_file *m,
seq_printf(m,
"Size: %8lu kB\n"
"Rss: %8lu kB\n"
+ "Pss: %8lu kB\n"
"Shared_Clean: %8lu kB\n"
"Shared_Dirty: %8lu kB\n"
"Private_Clean: %8lu kB\n"
@@ -402,6 +428,7 @@ static int show_smap(struct seq_file *m,
"Referenced: %8lu kB\n",
(vma->vm_end - vma->vm_start) >> 10,
sarg.mss.resident >> 10,
+ (unsigned long)(sarg.mss.pss >> (10 + PSS_ERROR_BITS)),
sarg.mss.shared_clean >> 10,
sarg.mss.shared_dirty >> 10,
sarg.mss.private_clean >> 10,
--
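The fixed-point trick in the hunk above is easy to model in user space. A
sketch (Python; PAGE_SIZE and PSS_ERROR_BITS are hard-coded to the values the
patch assumes) shows how the 12 extra fractional bits keep the accumulated
truncation loss small:

```python
PAGE_SIZE = 4096
PSS_ERROR_BITS = 12

def pss_kb(mapcounts):
    """Accumulate PSS the way smaps_pte_range() does: keep 12
    fractional bits per page, shift them out only at print time."""
    pss = 0
    for mapcount in mapcounts:
        if mapcount >= 2:
            pss += (PAGE_SIZE << PSS_ERROR_BITS) // mapcount
        else:
            pss += PAGE_SIZE << PSS_ERROR_BITS
    return pss >> (10 + PSS_ERROR_BITS)  # bytes -> kB, drop error bits

# 1M pages, each shared by 3 processes: the exact answer is
# 4GB/3 = 1398101.33 kB, and the accumulated truncation loss
# stays far below the 8KB figure quoted in the comment.
print(pss_kb([3] * (1 << 20)))
```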
* [PATCH 2/4] maps: address based vma walking
[not found] ` <20070819075547.562443204@mail.ustc.edu.cn>
@ 2007-08-19 7:54 ` Fengguang Wu
0 siblings, 0 replies; 7+ messages in thread
From: Fengguang Wu @ 2007-08-19 7:54 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matt Mackall, Al Viro, John Berthels, linux-kernel
[-- Attachment #1: maps-enable-address-based-pos.patch --]
[-- Type: text/plain, Size: 5388 bytes --]
Split large vmas into page groups of proc_maps_private.batch_size bytes, and
iterate them one by one for seqfile->show.
This allows us to export large scale process address space information via the
seqfile interface. The old behavior of walking one vma at a time can be
achieved by setting the batching size to ~0UL.
The conversion to address-based walking makes the code:
- cleaner: the code size is halved
- faster: rbtree lookup beats list traversal
- more stable: vmas won't be missed or duplicated when they are inserted or
deleted between reads
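The batched iteration itself can be modelled abstractly in user space. A
sketch (Python; the vma list and batch size are made-up illustration values,
not kernel data) of splitting sorted address ranges into batch_size pieces:

```python
def walk_batched(vmas, batch_size):
    """vmas: sorted, non-overlapping (start, end) address ranges.

    Yield (start, end) sub-ranges of at most batch_size bytes,
    mirroring the address-based iteration the patch switches to."""
    for start, end in vmas:
        addr = start
        while addr < end:
            nxt = min(addr + batch_size, end)
            yield (addr, nxt)
            addr = nxt

# A 10-byte vma walked in batches of 4, then a small second vma.
print(list(walk_batched([(0, 10), (16, 20)], 4)))
# -> [(0, 4), (4, 8), (8, 10), (16, 20)]
```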
Cc: Matt Mackall <mpm@selenic.com>
Cc: Al Viro <viro@ftp.linux.org.uk>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
fs/proc/task_mmu.c | 101 ++++++++++++--------------------------
include/linux/proc_fs.h | 7 +-
mm/mempolicy.c | 2
3 files changed, 36 insertions(+), 74 deletions(-)
--- linux-2.6.23-rc2-mm2.orig/include/linux/proc_fs.h
+++ linux-2.6.23-rc2-mm2/include/linux/proc_fs.h
@@ -283,9 +283,10 @@ static inline struct proc_dir_entry *PDE
struct proc_maps_private {
struct pid *pid;
struct task_struct *task;
-#ifdef CONFIG_MMU
- struct vm_area_struct *tail_vma;
-#endif
+ struct mm_struct *mm;
+ /* walk min(batch_size, remaining_size_of(vma)) bytes at a time */
+ unsigned long batch_size;
+ unsigned long addr;
};
#endif /* _LINUX_PROC_FS_H */
--- linux-2.6.23-rc2-mm2.orig/mm/mempolicy.c
+++ linux-2.6.23-rc2-mm2/mm/mempolicy.c
@@ -1937,7 +1937,5 @@ out:
seq_putc(m, '\n');
kfree(md);
- if (m->count < m->size)
- m->version = (vma != priv->tail_vma) ? vma->vm_start : 0;
return 0;
}
--- linux-2.6.23-rc2-mm2.orig/fs/proc/task_mmu.c
+++ linux-2.6.23-rc2-mm2/fs/proc/task_mmu.c
@@ -115,99 +115,63 @@ static void pad_len_spaces(struct seq_fi
seq_printf(m, "%*c", len, ' ');
}
-static void vma_stop(struct proc_maps_private *priv, struct vm_area_struct *vma)
+static void *seek_vma_addr(struct seq_file *m,
+ struct vm_area_struct *vma, loff_t *pos)
{
- if (vma && vma != priv->tail_vma) {
- struct mm_struct *mm = vma->vm_mm;
- up_read(&mm->mmap_sem);
- mmput(mm);
+ struct proc_maps_private *priv = m->private;
+ unsigned long addr = *pos;
+
+ if (addr & 1) { /* time for next batch */
+ if (vma->vm_end - addr < priv->batch_size) {
+ vma = vma->vm_next;
+ if (!vma || vma == get_gate_vma(priv->task))
+ return NULL;
+ } else
+ addr = (addr + priv->batch_size) & ~1;
}
+ if (addr < vma->vm_start)
+ addr = vma->vm_start;
+
+ *pos = priv->addr = addr;
+ return vma;
}
static void *m_start(struct seq_file *m, loff_t *pos)
{
struct proc_maps_private *priv = m->private;
- unsigned long last_addr = m->version;
- struct mm_struct *mm;
- struct vm_area_struct *vma, *tail_vma = NULL;
- loff_t l = *pos;
-
- /* Clear the per syscall fields in priv */
- priv->task = NULL;
- priv->tail_vma = NULL;
-
- /*
- * We remember last_addr rather than next_addr to hit with
- * mmap_cache most of the time. We have zero last_addr at
- * the beginning and also after lseek. We will have -1 last_addr
- * after the end of the vmas.
- */
-
- if (last_addr == -1UL)
- return NULL;
+ struct vm_area_struct *vma;
+ priv->mm = NULL;
priv->task = get_pid_task(priv->pid, PIDTYPE_PID);
if (!priv->task)
return NULL;
- mm = get_task_mm(priv->task);
- if (!mm)
+ priv->mm = get_task_mm(priv->task);
+ if (!priv->mm)
return NULL;
- priv->tail_vma = tail_vma = get_gate_vma(priv->task);
- down_read(&mm->mmap_sem);
+ down_read(&priv->mm->mmap_sem);
- /* Start with last addr hint */
- if (last_addr && (vma = find_vma(mm, last_addr))) {
- vma = vma->vm_next;
- goto out;
- }
-
- /*
- * Check the vma index is within the range and do
- * sequential scan until m_index.
- */
- vma = NULL;
- if ((unsigned long)l < mm->map_count) {
- vma = mm->mmap;
- while (l-- && vma)
- vma = vma->vm_next;
- goto out;
- }
-
- if (l != mm->map_count)
- tail_vma = NULL; /* After gate vma */
-
-out:
- if (vma)
- return vma;
+ vma = find_vma(priv->mm, *pos);
+ if (!vma || vma == get_gate_vma(priv->task))
+ return NULL;
- /* End of vmas has been reached */
- m->version = (tail_vma != NULL)? 0: -1UL;
- up_read(&mm->mmap_sem);
- mmput(mm);
- return tail_vma;
+ return seek_vma_addr(m, vma, pos);
}
static void *m_next(struct seq_file *m, void *v, loff_t *pos)
{
- struct proc_maps_private *priv = m->private;
- struct vm_area_struct *vma = v;
- struct vm_area_struct *tail_vma = priv->tail_vma;
-
(*pos)++;
- if (vma && (vma != tail_vma) && vma->vm_next)
- return vma->vm_next;
- vma_stop(priv, vma);
- return (vma != tail_vma)? tail_vma: NULL;
+ return seek_vma_addr(m, v, pos);
}
static void m_stop(struct seq_file *m, void *v)
{
struct proc_maps_private *priv = m->private;
- struct vm_area_struct *vma = v;
-
- vma_stop(priv, vma);
+ if (priv->mm) {
+ up_read(&priv->mm->mmap_sem);
+ mmput(priv->mm);
+ }
if (priv->task)
put_task_struct(priv->task);
}
@@ -220,6 +184,7 @@ static int do_maps_open(struct inode *in
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (priv) {
priv->pid = proc_pid(inode);
+ priv->batch_size = ~0;
ret = seq_open(file, ops);
if (!ret) {
struct seq_file *m = file->private_data;
@@ -291,8 +256,6 @@ static int show_map(struct seq_file *m,
}
seq_putc(m, '\n');
- if (m->count < m->size) /* vma is copied successfully */
- m->version = (vma != get_gate_vma(task))? vma->vm_start: 0;
return 0;
}
--
* [PATCH 3/4] maps: introduce generic_maps_open()
[not found] ` <20070819075547.674060890@mail.ustc.edu.cn>
@ 2007-08-19 7:54 ` Fengguang Wu
0 siblings, 0 replies; 7+ messages in thread
From: Fengguang Wu @ 2007-08-19 7:54 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matt Mackall, John Berthels, linux-kernel
[-- Attachment #1: maps-generic-open.patch --]
[-- Type: text/plain, Size: 1873 bytes --]
Introduce generic_maps_open(), an extended version of do_maps_open() that
supports batch_size and custom-sized seqfile/private buffers. It will be
reused by pmaps.
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
fs/proc/task_mmu.c | 51 ++++++++++++++++++++++++++++++-------------
1 file changed, 36 insertions(+), 15 deletions(-)
--- linux-2.6.23-rc2-mm2.orig/fs/proc/task_mmu.c
+++ linux-2.6.23-rc2-mm2/fs/proc/task_mmu.c
@@ -176,24 +176,45 @@ static void m_stop(struct seq_file *m, v
put_task_struct(priv->task);
}
+static int generic_maps_open(struct inode *inode, struct file *file,
+ struct seq_operations *ops, unsigned long batch_size,
+ int bufsize, int privsize)
+{
+ struct seq_file *m;
+ struct proc_maps_private *priv = NULL;
+ char *buf = NULL;
+ int ret = -ENOMEM;
+
+ priv = kzalloc(privsize, GFP_KERNEL);
+ if (!priv)
+ goto out;
+
+ buf = kmalloc(bufsize, GFP_KERNEL);
+ if (!buf)
+ goto out;
+
+ ret = seq_open(file, ops);
+ if (ret)
+ goto out;
+
+ m = file->private_data;
+ m->private = priv;
+ m->buf = buf;
+ m->size = bufsize;
+ priv->pid = proc_pid(inode);
+ priv->batch_size = batch_size;
+ return 0;
+out:
+ kfree(priv);
+ kfree(buf);
+ return ret;
+}
+
static int do_maps_open(struct inode *inode, struct file *file,
struct seq_operations *ops)
{
- struct proc_maps_private *priv;
- int ret = -ENOMEM;
- priv = kzalloc(sizeof(*priv), GFP_KERNEL);
- if (priv) {
- priv->pid = proc_pid(inode);
- priv->batch_size = ~0;
- ret = seq_open(file, ops);
- if (!ret) {
- struct seq_file *m = file->private_data;
- m->private = priv;
- } else {
- kfree(priv);
- }
- }
- return ret;
+ return generic_maps_open(inode, file, ops, ~0, 2 * PAGE_SIZE,
+ sizeof(struct proc_maps_private));
}
static int show_map(struct seq_file *m, void *v)
--
* [PATCH 4/4] maps: /proc/<pid>/pmaps interface - memory maps in granularity of pages
[not found] ` <20070819075547.785390741@mail.ustc.edu.cn>
@ 2007-08-19 7:54 ` Fengguang Wu
0 siblings, 0 replies; 7+ messages in thread
From: Fengguang Wu @ 2007-08-19 7:54 UTC (permalink / raw)
To: Andrew Morton
Cc: Matt Mackall, Jeremy Fitzhardinge, David Rientjes, John Berthels,
Nick Piggin, linux-kernel
[-- Attachment #1: pmaps.patch --]
[-- Type: text/plain, Size: 9023 bytes --]
Show a process's page-by-page address space information in /proc/<pid>/pmaps.
It helps to analyze application memory footprints in a comprehensive way.
Pages sharing the same state are grouped into page ranges.
For each page range, the following fields are exported:
- [HEX NUM] first page index
- [HEX NUM] number of pages in the range
- [STRING] well known page/pte flags
- [DEC NUM] number of mmap users
Only page flags not expected to disappear in the near future are exported:
Y:pteyoung R:referenced A:active U:uptodate P:ptedirty D:dirty W:writeback
Here is a sample output:
wfg ~% cat /proc/$$/pmaps
00400000-00492000 r-xp 00000000 08:01 1727526 /bin/zsh4
0 1 YRAU___ 7
2 16 YRAU___ 7
19 5 YRAU___ 7
20 18 YRAU___ 7
38 1 YRAU___ 6
39 9 YRAU___ 7
43 46 YRAU___ 7
00691000-00697000 rw-p 00091000 08:01 1727526 /bin/zsh4
91 6 Y_A_P__ 1
00697000-00b6e000 rw-p 00697000 00:00 0 [heap]
698 1 Y_A_P__ 1
69c 1 Y_A_P__ 1
69e 2 Y_A_P__ 1
6a6 1 Y_A_P__ 1
6a8 2 Y_A_P__ 1
6ad 4bc Y_A_P__ 1
b6a 2 Y_A_P__ 1
2b661b3ea000-2b661b407000 r-xp 00000000 08:01 1563879 /lib/ld-2.6.so
0 1 YRAU___ 78
1 4 YRAU___ 56
5 2 YRAU___ 70
7 1 YRAU___ 65
8 1 YRAU___ 70
9 1 YRAU___ 96
a 1 YRAU___ 72
b 2 YRAU___ 70
d 2 YRAU___ 96
f 1 YRAU___ 70
10 1 YRAU___ 58
11 1 YRAU___ 52
12 1 YRAU___ 19
13 1 YRAU___ 96
14 1 YRAU___ 57
15 1 YRAU___ 96
16 1 YRAU___ 71
17 1 YRAU___ 96
18 1 YRAU___ 52
1a 1 YRAU___ 70
2b661b407000-2b661b40a000 rw-p 2b661b407000 00:00 0
2b661b407 3 Y_A_P__ 1
2b661b606000-2b661b608000 rw-p 0001c000 08:01 1563879 /lib/ld-2.6.so
1c 2 Y_A_P__ 1
[...]
Matt Mackall's pagemap/kpagemap and John Berthels's exmap can achieve the same
goals, and probably more. But this text-based pmaps interface should be easier
to use.
The concern about data set size is addressed by working in a sparse way.
1) Output is only generated for resident pages, which normally amount to much
less than the mapped size. For example, the VSZ:RSS ratio over ALL
processes here is 4516796KB:457048KB ~= 10:1.
2) The page range trick suppresses the output further. For example, my
running firefox has an (RSS_pages:page_ranges) ratio of 16k:2k ~= 8:1.
It's interesting that the seq_file interface demands some extra programming
effort, but provides the needed flexibility as well.
In the worst case, where each resident page makes its own line in pmaps, a 4GB
RSS can produce up to 1M lines, or about 20MB of data.
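For reference, the four-field page-range records are trivial to consume from
user space. A hypothetical parser sketch (Python; the flag letter order
YRAUPDW follows the list above, and parse_range is an invented helper name):

```python
def parse_range(line):
    """Parse one pmaps page-range line:
    <hex first page index> <hex page count> <flags> <mapcount>."""
    offset, length, flags, mapcount = line.split()
    names = "YRAUPDW"  # pteyoung referenced active uptodate ptedirty dirty writeback
    return {
        "offset": int(offset, 16),    # first page index, hex
        "pages": int(length, 16),     # number of pages in the range, hex
        "flags": {n for n, c in zip(names, flags) if c != "_"},
        "mapcount": int(mapcount),    # number of mmap users
    }

# One line from the libc mapping in the sample output above.
r = parse_range("17 6 YRAU___ 41")
print(r["offset"], r["pages"], sorted(r["flags"]), r["mapcount"])
```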
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Matt Mackall <mpm@selenic.com>
Cc: John Berthels <jjberthels@gmail.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
fs/proc/base.c | 7 +
fs/proc/internal.h | 1
fs/proc/task_mmu.c | 180 +++++++++++++++++++++++++++++++++++++++++++
3 files changed, 188 insertions(+)
--- linux-2.6.23-rc2-mm2.orig/fs/proc/task_mmu.c
+++ linux-2.6.23-rc2-mm2/fs/proc/task_mmu.c
@@ -752,3 +752,183 @@ const struct file_operations proc_numa_m
.release = seq_release_private,
};
#endif /* CONFIG_NUMA */
+
+struct pmaps_private {
+ struct proc_maps_private pmp;
+ struct vm_area_struct *vma;
+ struct seq_file *m;
+ /* page range attrs */
+ unsigned long offset;
+ unsigned long len;
+ unsigned long flags;
+ int mapcount;
+};
+
+#define PMAPS_BUF_SIZE (64<<10) /* 64K */
+#define PMAPS_BATCH_SIZE (16<<20) /* 16M */
+
+#define PG_YOUNG PG_readahead /* reuse any non-relevant flag */
+#define PG_DIRTY PG_lru /* ditto */
+
+static unsigned long page_mask;
+
+static struct {
+ unsigned long mask;
+ const char *name;
+ bool faked;
+} page_flags [] = {
+ {1 << PG_YOUNG, "Y:pteyoung", 1},
+ {1 << PG_referenced, "R:referenced", 0},
+ {1 << PG_active, "A:active", 0},
+
+ {1 << PG_uptodate, "U:uptodate", 0},
+ {1 << PG_DIRTY, "P:ptedirty", 1},
+ {1 << PG_dirty, "D:dirty", 0},
+ {1 << PG_writeback, "W:writeback", 0},
+};
+
+static unsigned long pte_page_flags(pte_t ptent, struct page* page)
+{
+ unsigned long flags;
+
+ flags = page->flags & page_mask;
+
+ if (pte_young(ptent))
+ flags |= (1 << PG_YOUNG);
+
+ if (pte_dirty(ptent))
+ flags |= (1 << PG_DIRTY);
+
+ return flags;
+}
+
+static int pmaps_show_range(struct pmaps_private *pp)
+{
+ int i;
+
+ if (!pp->len)
+ return 0;
+
+ seq_printf(pp->m, "%lx\t%lx\t", pp->offset, pp->len);
+
+ for (i = 0; i < ARRAY_SIZE(page_flags); i++)
+ seq_putc(pp->m, (pp->flags & page_flags[i].mask) ?
+ page_flags[i].name[0] : '_');
+
+ return seq_printf(pp->m, "\t%d\n", pp->mapcount);
+}
+
+int pmaps_add_page(struct pmaps_private *pp, unsigned long offset,
+ unsigned long flags, int mapcount)
+{
+ int ret = 0;
+
+ if (offset == pp->offset + pp->len &&
+ flags == pp->flags &&
+ mapcount == pp->mapcount) {
+ pp->len++;
+ } else {
+ ret = pmaps_show_range(pp);
+ pp->offset = offset;
+ pp->len = 1;
+ pp->flags = flags;
+ pp->mapcount = mapcount;
+ }
+
+ return ret;
+}
+
+static int pmaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+ void *private)
+{
+ struct pmaps_private *pp = private;
+ struct vm_area_struct *vma = pp->vma;
+ pte_t *pte, *apte, ptent;
+ spinlock_t *ptl;
+ struct page *page;
+ unsigned long offset;
+ unsigned long flags;
+ int mapcount;
+ int ret = 0;
+
+ apte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+ for (; addr != end; pte++, addr += PAGE_SIZE) {
+ ptent = *pte;
+ if (!pte_present(ptent))
+ continue;
+
+ page = vm_normal_page(vma, addr, ptent);
+ if (!page)
+ continue;
+
+ offset = page_index(page);
+ mapcount = page_mapcount(page);
+ flags = pte_page_flags(ptent, page);
+
+ if (pmaps_add_page(pp, offset, flags, mapcount))
+ break;
+ }
+ pte_unmap_unlock(apte, ptl);
+ cond_resched();
+ return ret;
+}
+
+static struct mm_walk pmaps_walk = { .pmd_entry = pmaps_pte_range };
+static int show_pmaps(struct seq_file *m, void *v)
+{
+ struct vm_area_struct *vma = v;
+ struct pmaps_private *pp = m->private;
+ unsigned long addr = pp->pmp.addr;
+ unsigned long end;
+ int ret;
+
+ if (addr == vma->vm_start) {
+ ret = show_map(m, vma);
+ if (ret)
+ return ret;
+ }
+
+ end = vma->vm_end;
+ if (end - addr > PMAPS_BATCH_SIZE)
+ end = addr + PMAPS_BATCH_SIZE;
+
+ pp->m = m;
+ pp->vma = vma;
+ pp->len = 0;
+ walk_page_range(vma->vm_mm, addr, end, &pmaps_walk, pp);
+ pmaps_show_range(pp);
+
+ return 0;
+}
+
+static struct seq_operations proc_pid_pmaps_op = {
+ .start = m_start,
+ .next = m_next,
+ .stop = m_stop,
+ .show = show_pmaps
+};
+
+static int pmaps_open(struct inode *inode, struct file *file)
+{
+ return generic_maps_open(inode, file, &proc_pid_pmaps_op,
+ PMAPS_BATCH_SIZE, PMAPS_BUF_SIZE,
+ sizeof(struct pmaps_private));
+}
+
+const struct file_operations proc_pmaps_operations = {
+ .open = pmaps_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release_private,
+};
+
+static __init int task_mmu_init(void)
+{
+ int i;
+ for (page_mask = 0, i = 0; i < ARRAY_SIZE(page_flags); i++)
+ if (!page_flags[i].faked)
+ page_mask |= page_flags[i].mask;
+ return 0;
+}
+
+pure_initcall(task_mmu_init);
--- linux-2.6.23-rc2-mm2.orig/fs/proc/base.c
+++ linux-2.6.23-rc2-mm2/fs/proc/base.c
@@ -45,6 +45,11 @@
*
* Paul Mundt <paul.mundt@nokia.com>:
* Overall revision about smaps.
+ *
+ * ChangeLog:
+ * 15-Aug-2007
+ * Fengguang Wu <wfg@mail.ustc.edu.cn>:
+ * Page granularity mapping info in pmaps.
*/
#include <asm/uaccess.h>
@@ -2044,6 +2049,7 @@ static const struct pid_entry tgid_base_
#ifdef CONFIG_PROC_PAGE_MONITOR
REG("clear_refs", S_IWUSR, clear_refs),
REG("smaps", S_IRUGO, smaps),
+ REG("pmaps", S_IRUSR, pmaps),
REG("pagemap", S_IRUSR, pagemap),
#endif
#endif
@@ -2336,6 +2342,7 @@ static const struct pid_entry tid_base_s
#ifdef CONFIG_PROC_PAGE_MONITOR
REG("clear_refs", S_IWUSR, clear_refs),
REG("smaps", S_IRUGO, smaps),
+ REG("pmaps", S_IRUSR, pmaps),
REG("pagemap", S_IRUSR, pagemap),
#endif
#endif
--- linux-2.6.23-rc2-mm2.orig/fs/proc/internal.h
+++ linux-2.6.23-rc2-mm2/fs/proc/internal.h
@@ -50,6 +50,7 @@ extern loff_t mem_lseek(struct file * fi
extern const struct file_operations proc_maps_operations;
extern const struct file_operations proc_numa_maps_operations;
extern const struct file_operations proc_smaps_operations;
+extern const struct file_operations proc_pmaps_operations;
extern const struct file_operations proc_clear_refs_operations;
extern const struct file_operations proc_pagemap_operations;
--
* Re: [PATCH 0/4] process memory footprint info in proc/<pid>/[s|p]maps v2
2007-08-19 7:54 ` [PATCH 0/4] process memory footprint info in proc/<pid>/[s|p]maps v2 Fengguang Wu
@ 2007-08-20 3:15 ` Ray Lee
[not found] ` <20070820075515.GA10476@mail.ustc.edu.cn>
0 siblings, 1 reply; 7+ messages in thread
From: Ray Lee @ 2007-08-20 3:15 UTC (permalink / raw)
To: Fengguang Wu; +Cc: Andrew Morton, Matt Mackall, John Berthels, linux-kernel
On 8/19/07, Fengguang Wu <wfg@mail.ustc.edu.cn> wrote:
> Inspired by Matt Mackall's pagemap patches and ideas, I worked up these
> textual interfaces that achieve the same goals. The patches run OK
> under different sized reads.
[...]
> 2b7d6e8f3000-2b7d6ea40000 r-xp 00000000 08:01 1564031 /lib/libc-2.6.so
> 0 2 YRAU___ 81
> 2 1 YRAU___ 80
> 3 1 YRAU___ 81
> 4 1 YRAU___ 72
> 5 1 YRAU___ 77
> 6 1 YRAU___ 79
> 7 1 YRAU___ 73
> 8 1 YRAU___ 79
> 9 1 YRAU___ 78
> [...]
Eh, I'd think that pivoting the data set would be a much more natural
(& therefore shorter) representation.
YRAU___ 0,2:81 2:80 3:81 4:72 5:77 6:79 7:73 8:79 9:78
...versus...
> 0 2 YRAU___ 81
> 2 1 YRAU___ 80
> 3 1 YRAU___ 81
> 4 1 YRAU___ 72
> 5 1 YRAU___ 77
> 6 1 YRAU___ 79
> 7 1 YRAU___ 73
> 8 1 YRAU___ 79
> 9 1 YRAU___ 78
So, flags followed by a list of offset[,length]:usage. If the flags
change, start a new line. If you get a line that's too long, start a
new line.
No?
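The suggested pivot is straightforward to prototype. A sketch (Python, with
an invented helper name; records are (offset, length, flags, usage) tuples)
that groups consecutive records by flags and emits offset[,length]:usage
tokens:

```python
def pivot(records):
    """records: (offset, length, flags, usage) tuples in file order.

    Emit one line per run of identical flags: the flags string,
    then offset[,length]:usage tokens with hex offsets/lengths."""
    lines, cur_flags, toks = [], None, []
    for off, length, flags, usage in records:
        if flags != cur_flags:
            if toks:
                lines.append(cur_flags + " " + " ".join(toks))
            cur_flags, toks = flags, []
        span = "%x" % off if length == 1 else "%x,%x" % (off, length)
        toks.append("%s:%d" % (span, usage))
    if toks:
        lines.append(cur_flags + " " + " ".join(toks))
    return lines

# The first three libc records from the quoted output.
print(pivot([(0, 2, "YRAU___", 81), (2, 1, "YRAU___", 80),
             (3, 1, "YRAU___", 81)])[0])
# -> YRAU___ 0,2:81 2:80 3:81
```

(Line-length splitting is left out for brevity.)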
* Re: [PATCH 0/4] process memory footprint info in proc/<pid>/[s|p]maps v2
[not found] ` <20070820075515.GA10476@mail.ustc.edu.cn>
@ 2007-08-20 7:55 ` Fengguang Wu
0 siblings, 0 replies; 7+ messages in thread
From: Fengguang Wu @ 2007-08-20 7:55 UTC (permalink / raw)
To: Ray Lee; +Cc: Andrew Morton, Matt Mackall, John Berthels, linux-kernel
On Sun, Aug 19, 2007 at 08:15:01PM -0700, Ray Lee wrote:
> On 8/19/07, Fengguang Wu <wfg@mail.ustc.edu.cn> wrote:
> > Inspired by Matt Mackall's pagemap patches and ideas, I worked up these
> > textual interfaces that achieve the same goals. The patches run OK
> > under different sized reads.
> [...]
> > 2b7d6e8f3000-2b7d6ea40000 r-xp 00000000 08:01 1564031 /lib/libc-2.6.so
> > 0 2 YRAU___ 81
> > 2 1 YRAU___ 80
> > 3 1 YRAU___ 81
> > 4 1 YRAU___ 72
> > 5 1 YRAU___ 77
> > 6 1 YRAU___ 79
> > 7 1 YRAU___ 73
> > 8 1 YRAU___ 79
> > 9 1 YRAU___ 78
> > [...]
>
> Eh, I'd think that pivoting the data set would be a much more natural
> (& therefore shorter) representation.
>
> YRAU___ 0,2:81 2:80 3:81 4:72 5:77 6:79 7:73 8:79 9:78
> ...versus...
> > 0 2 YRAU___ 81
> > 2 1 YRAU___ 80
> > 3 1 YRAU___ 81
> > 4 1 YRAU___ 72
> > 5 1 YRAU___ 77
> > 6 1 YRAU___ 79
> > 7 1 YRAU___ 73
> > 8 1 YRAU___ 79
> > 9 1 YRAU___ 78
>
> So, flags followed by a list of offset[,length]:usage. If the flags
> change, start a new line. If you get a line that's too long, start a
> new line.
>
> No?
It's an interesting suggestion: the compression ratio could be doubled
on average. Thank you!
However, I still appreciate the 'beauty' of tabbed output ;-)
I'm working on an interface with similar output (see filecache below). It is
less likely to benefit from the new format - and I want the two interfaces to
be consistent.
As for the format, I'm not quite sure about two things:
- should hex values be used for offset/len?
Hex values are less user-friendly but more consistent with the format of
maps. For example, the 001bc000/1bc and 007d1000/7d1 pairs below:
007bc000-007d1000 rw-p 001bc000 08:01 199284 /usr/bin/Xorg
^^^^^
1bc 1 YR_U___ 1
^^^
1bd 5 Y_A_P__ 1
1c3 3 Y_A_P__ 1
1c6 5 YR_U___ 1
1cb 1 Y_A_P__ 1
1cc 1 YR_U___ 1
1cd 4 Y_A_P__ 1
007d1000-0108a000 rw-p 007d1000 00:00 0 [heap]
^^^^^
7d1 4 Y_A_P__ 1
^^^
7d6 3 Y_A_P__ 1
7da 5 Y_A_P__ 1
7e0 41 Y_A_P__ 1
- should the page size be printed as a file header (offset/len are in page
units)? e.g.
# page size 4096
...
But it is also available via the getpagesize() call.
I'd appreciate it a lot if you or other people could give advice on these points.
Thank you,
Fengguang
---
filecache: show cached files and their cached pages
===================================================
bash-3.2# cat /proc/filecache
# filecache 1.0
# ino cached cached% refcnt state dev file
0 568 0 0 -- 00:02(bdev) (03:00)
176359 1164 95 2 d- 03:00(hda) /lib/libc-2.3.6.so
176355 92 100 2 d- 03:00(hda) /lib/ld-2.3.6.so
96199 12 100 0 d- 03:00(hda) /bin/cat
32286 4 100 0 -- 03:00(hda) /etc/mtab
96194 40 100 0 d- 03:00(hda) /bin/mount
529061 4 100 0 -- 03:00(hda) /usr/share/terminfo/l/linux
48099 4 100 0 -- 03:00(hda) /.bash_history
176633 40 100 1 -- 03:00(hda) /lib/libnss_files-2.3.6.so
176635 36 100 1 -- 03:00(hda) /lib/libnss_nis-2.3.6.so
176630 76 100 1 -- 03:00(hda) /lib/libnsl-2.3.6.so
176631 36 100 1 -- 03:00(hda) /lib/libnss_compat-2.3.6.so
176362 12 100 1 -- 03:00(hda) /lib/libdl-2.3.6.so
176690 264 100 1 -- 03:00(hda) /lib/libncurses.so.5.5
96202 244 100 0 -- 03:00(hda) /bin/bash
73 568 100 1 -- 00:01(rootfs) /dev/root
bash-3.2# echo /lib/libc-2.3.6.so > /proc/filecache
bash-3.2# cat /proc/filecache
# file /lib/libc-2.3.6.so
# flags R:referenced A:active U:uptodate D:dirty W:writeback M:mmap
# idx len state refcnt
0 16 RAU__M 3
16 3 __U___ 1
19 1 R_U__M 2
1a 5 __U___ 1
1f 1 RAU__M 3
20 1 __U___ 1
21 1 R_U__M 2
22 2 RAU__M 3
24 1 R_U__M 2
25 1 __U___ 1
26 1 RAU__M 3
27 2 __U___ 1
29 2 RAU__M 2
2b 2 RAU__M 3
2d 2 R_U__M 2
2f 9 __U___ 1
38 1 R_U__M 2
39 1 __U___ 1
3a 5 RAU__M 2
3f 1 R_U__M 2
40 4 __U___ 1
44 1 RAU__M 2
45 3 __U___ 1
48 6 R_U__M 2
4e 4 __U___ 1
52 1 R_U__M 2
53 2 RAU__M 2
55 1 R_U__M 2
56 2 RAU__M 2
58 1 R_U__M 2
59 2 __U___ 1
5b 1 RAU__M 2
5c 1 R_U__M 2
5d 5 RAU__M 2
62 1 RAU__M 3
63 2 RAU__M 2
65 1 R_U___ 1
66 3 RAU__M 2
69 1 __U___ 1
6a 3 RAU__M 3
6d 1 R_U__M 2
6e 3 __U___ 1
71 1 R_U__M 2
72 c __U___ 1
7e 1 R_U__M 2
7f 9 __U___ 1
88 2 R_U__M 2
8a 1 __U___ 1
8b 1 R_U__M 2
8c 2 RAU__M 2
8e 1 R_U__M 2
8f 9 __U___ 1
9f f __U___ 1
ae 2 RAU__M 2
b0 9 __U___ 1
b9 2 RAU__M 3
bb 1 RAU__M 2
bc 4 __U___ 1
c0 2 RAU__M 3
c2 1 R_U__M 2
c3 2 __U___ 1
c5 1 RAU__M 2
c6 2 R_U__M 2
c8 1 RAU__M 3
c9 1 RAU__M 2
ca c __U___ 1
d6 1 RAU__M 3
d7 3 __U___ 1
da 3 R_U__M 2
dd f __U___ 1
f3 5 __U___ 1
f8 4 R_U__M 2
fc 1 __U___ 1
fd 2 R_U__M 2
ff 1 RAU__M 3
100 1 R_U__M 2
101 2 __U___ 1
103 1 RAU__M 3
104 5 __U___ 1
109 1 RAU__M 3
10a a __U___ 1
114 1 RAU__M 2
115 1 RAU__M 3
116 1 __U___ 1
117 1 RAU__M 2
118 2 R_U__M 2
11a 2 __U___ 1
11c 1 RAU__M 3
11d 1 RAU__M 2
11e 1 RAU__M 3
11f 2 R_U__M 2
121 1 __U___ 1
122 1 R_U__M 2
123 4 __U___ 1
127 4 RAU___ 1
12b 6 __U___ 1