From: Eric Sandeen <sandeen@redhat.com>
To: Wang Sheng-Hui <crosslonelyover@gmail.com>
Cc: linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk,
linux-mm@kvack.org, linux-ext4 <linux-ext4@vger.kernel.org>,
kernel-janitors <kernel-janitors@vger.kernel.org>,
Christoph Hellwig <hch@infradead.org>
Subject: Re: [PATCH 1/2 RESEND] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
Date: Tue, 20 Jul 2010 11:49:52 -0500 [thread overview]
Message-ID: <4C45D3B0.7030202@redhat.com> (raw)
In-Reply-To: <4C447CE9.20904@redhat.com>
Eric Sandeen wrote:
...
> Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Actually, I retract that; as Andreas pointed out:
>> fs/mbcache.c | 22 +++++++++++-----------
>> 1 files changed, 11 insertions(+), 11 deletions(-)
>>
>> diff --git a/fs/mbcache.c b/fs/mbcache.c
>> index ec88ff3..5697d9e 100644
>> --- a/fs/mbcache.c
>> +++ b/fs/mbcache.c
>> @@ -201,21 +201,13 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
>> {
>> LIST_HEAD(free_list);
>> struct list_head *l, *ltmp;
>> + struct mb_cache *cache;
>> int count = 0;
>>
>> - spin_lock(&mb_cache_spinlock);
you've lost this spin_lock ...
>> - list_for_each(l, &mb_cache_list) {
>> - struct mb_cache *cache =
>> - list_entry(l, struct mb_cache, c_cache_list);
>> - mb_debug("cache %s (%d)", cache->c_name,
>> - atomic_read(&cache->c_entry_count));
>> - count += atomic_read(&cache->c_entry_count);
>> - }
>> mb_debug("trying to free %d entries", nr_to_scan);
>> - if (nr_to_scan == 0) {
>> - spin_unlock(&mb_cache_spinlock);
>> + if (nr_to_scan == 0)
>> goto out;
>> - }
>> +
and here you're iterating over the LRU list while unlocked....
>> while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
>> struct mb_cache_entry *ce =
>> list_entry(mb_cache_lru_list.next,
>> struct mb_cache_entry, e_lru_list);
>> list_move_tail(&ce->e_lru_list, &free_list);
>> __mb_cache_entry_unhash(ce);
>> }
>> spin_unlock(&mb_cache_spinlock);
.... and here you unlock an unlocked spinlock.
Sorry I missed that.
-Eric
>> @@ -229,6 +221,14 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
>> e_lru_list), gfp_mask);
>> }
>> out:
>> + spin_lock(&mb_cache_spinlock);
>> + list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
>> + mb_debug("cache %s (%d)", cache->c_name,
>> + atomic_read(&cache->c_entry_count));
>> + count += atomic_read(&cache->c_entry_count);
>> + }
>> + spin_unlock(&mb_cache_spinlock);
>> +
>> return (count / 100) * sysctl_vfs_cache_pressure;
>> }
>>
>
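For reference, a minimal sketch of the locking the loop appears to need: reacquire mb_cache_spinlock before walking mb_cache_lru_list and drop it afterwards, so the unlock further down stays balanced. This is only a sketch assembled from the identifiers in the quoted code, not necessarily the fix that eventually went upstream:

	spin_lock(&mb_cache_spinlock);
	/*
	 * Walk the LRU with the lock held; entries are moved to the
	 * private free_list and unhashed before the lock is dropped.
	 */
	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
		struct mb_cache_entry *ce =
			list_entry(mb_cache_lru_list.next,
				   struct mb_cache_entry, e_lru_list);
		list_move_tail(&ce->e_lru_list, &free_list);
		__mb_cache_entry_unhash(ce);
	}
	spin_unlock(&mb_cache_spinlock);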