From: Tino Reichardt <list-jfs@mcmilk.de>
To: jfs-discussion@lists.sourceforge.net,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-wireless@vger.kernel.org" <linux-wireless@vger.kernel.org>
Subject: Re: [Jfs-discussion] Out of memory on 3.5 kernels
Date: Thu, 1 Nov 2012 19:04:20 +0100
Message-ID: <20121101180420.GA24922@mcmilk.de>
In-Reply-To: <20121030103535.GA10526@schottelius.org>
* Nico Schottelius <nico-kernel20120920@schottelius.org> wrote:
> Good morning,
>
> update: this problem still exists on 3.6.2-1-ARCH and it got worse:
>
> I reformatted the external disk to use xfs, but as my
> root filesystem is still jfs, the problem still appears:
>
> Active / Total Objects (% used) : 642732 / 692268 (92.8%)
> Active / Total Slabs (% used) : 24801 / 24801 (100.0%)
> Active / Total Caches (% used) : 79 / 111 (71.2%)
> Active / Total Size (% used) : 603522.30K / 622612.05K (96.9%)
> Minimum / Average / Maximum Object : 0.01K / 0.90K / 15.25K
>
> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
> 475548 467649 98% 1.21K 18722 26 599104K jfs_ip
> 25670 19143 74% 0.05K 302 85 1208K shared_policy_node
> 24612 16861 68% 0.19K 1172 21 4688K dentry
> 24426 19524 79% 0.17K 1062 23 4248K vm_area_struct
> 21636 21180 97% 0.11K 601 36 2404K sysfs_dir_cache
> 12352 9812 79% 0.06K 193 64 772K kmalloc-64
> 11684 9145 78% 0.09K 254 46 1016K anon_vma
> 9855 8734 88% 0.58K 365 27 5840K inode_cache
> 9728 9281 95% 0.01K 19 512 76K kmalloc-8
> 8932 4411 49% 0.55K 319 28 5104K radix_tree_node
> 6336 5760 90% 0.25K 198 32 1584K kmalloc-256
> 5632 5632 100% 0.02K 22 256 88K kmalloc-16
> 4998 2627 52% 0.09K 119 42 476K kmalloc-96
> 4998 3893 77% 0.04K 49 102 196K Acpi-Namespace
> 4736 3887 82% 0.03K 37 128 148K kmalloc-32
> 4144 4144 100% 0.07K 74 56 296K Acpi-ParseExt
> 3740 3740 100% 0.02K 22 170 88K numa_policy
> 3486 3023 86% 0.19K 166 21 664K kmalloc-192
> 3200 2047 63% 0.12K 100 32 400K kmalloc-128
> 2304 2074 90% 0.50K 72 32 1152K kmalloc-512
> 2136 2019 94% 0.64K 89 24 1424K proc_inode_cache
> 2080 2080 100% 0.12K 65 32 260K jfs_mp
> 2024 1890 93% 0.70K 88 23 1408K shmem_inode_cache
> 1632 1556 95% 1.00K 51 32 1632K kmalloc-1024
>
>
> I am wondering whether anyone feels responsible for this bug, or
> whether the mid-term solution is to move away from jfs?
I also ran some tests when this bug was first reported, but I couldn't
reproduce it. Currently I have no idea what is going wrong there.
I think moving to ext4 or xfs is the best option for now... :(
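For anyone else tracking this, the jfs_ip growth shows up in /proc/slabinfo
long before the machine runs out of memory. Below is a small monitoring
sketch of my own (not something from this thread) that flags oversized slab
caches; the field offsets assume the "slabinfo - version: 2.1" layout and
the 100 MiB threshold is arbitrary:

```python
# Flag slab caches whose total footprint exceeds a threshold.
# Assumes the "slabinfo - version: 2.1" line format:
#   name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : ...

def big_slab_caches(slabinfo_text, threshold_kb=100 * 1024):
    """Return {cache_name: size_kb} for caches above threshold_kb."""
    results = {}
    for line in slabinfo_text.splitlines():
        if line.startswith(("slabinfo", "#")):
            continue  # skip the version and column-header lines
        fields = line.split()
        if len(fields) < 4:
            continue  # ignore anything that is not a cache line
        name = fields[0]
        num_objs = int(fields[2])   # total (not just active) objects
        objsize = int(fields[3])    # bytes per object
        size_kb = num_objs * objsize // 1024
        if size_kb >= threshold_kb:
            results[name] = size_kb
    return results
```

Reading /proc/slabinfo needs root on most kernels; running this from a cron
job and alerting when jfs_ip keeps climbing would at least show the trend.
As a quick check: if `echo 2 > /proc/sys/vm/drop_caches` (as root) shrinks
jfs_ip, the inodes are reclaimable and memory pressure should ease; if it
does not, that points at a genuine leak.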
--
regards, TR
Thread overview: 17+ messages
2012-09-20 13:52 Out of memory on 3.5 kernels Arend van Spriel
2012-09-21 19:49 ` Nico Schottelius
2012-09-21 21:02 ` Fwd: " Arend van Spriel
2012-09-24 22:43 ` David Rientjes
2012-09-25 15:07 ` Dave Kleikamp
2012-09-26 6:03 ` Nico Schottelius
2012-09-26 6:06 ` Nico Schottelius
2012-09-26 8:57 ` Nico Schottelius
2012-09-27 5:52 ` Nico Schottelius
2012-10-03 21:23 ` Nico Schottelius
2012-10-05 15:48 ` Valdis.Kletnieks
2012-10-05 17:51 ` Nico Schottelius
2012-10-30 10:35 ` Nico Schottelius
2012-11-01 18:04 ` Tino Reichardt [this message]
2012-11-21 22:37 ` Dave Kleikamp
2012-11-27 15:56 ` [Jfs-discussion] " Dave Kleikamp
2012-11-27 16:11 ` Nico Schottelius