From: James Simmons <jsimmons@infradead.org>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
devel@driverdev.osuosl.org,
Andreas Dilger <andreas.dilger@intel.com>,
Oleg Drokin <oleg.drokin@intel.com>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Lustre Development List <lustre-devel@lists.lustre.org>,
Alex Zhuravlev <alexey.zhuravlev@intel.com>,
James Simmons <jsimmons@infradead.org>
Subject: [PATCH 07/22] staging: lustre: obdclass: lu_site_purge() to handle purge-all
Date: Fri, 2 Dec 2016 19:53:14 -0500
Message-ID: <1480726409-20350-8-git-send-email-jsimmons@infradead.org>
In-Reply-To: <1480726409-20350-1-git-send-email-jsimmons@infradead.org>
From: Alex Zhuravlev <alexey.zhuravlev@intel.com>
If the caller wants to purge all objects, then scanning
should start from the first bucket.
Signed-off-by: Alex Zhuravlev <alexey.zhuravlev@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-7038
Reviewed-on: http://review.whamcloud.com/18505
Reviewed-by: Mike Pershin <mike.pershin@intel.com>
Reviewed-by: Faccini Bruno <bruno.faccini@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
drivers/staging/lustre/lustre/obdclass/lu_object.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/lustre/lustre/obdclass/lu_object.c b/drivers/staging/lustre/lustre/obdclass/lu_object.c
index 43868ed..a02aaa3 100644
--- a/drivers/staging/lustre/lustre/obdclass/lu_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/lu_object.c
@@ -338,7 +338,7 @@ int lu_site_purge(const struct lu_env *env, struct lu_site *s, int nr)
struct cfs_hash_bd bd2;
struct list_head dispose;
int did_sth;
- unsigned int start;
+ unsigned int start = 0;
int count;
int bnr;
unsigned int i;
@@ -351,7 +351,8 @@ int lu_site_purge(const struct lu_env *env, struct lu_site *s, int nr)
* Under LRU list lock, scan LRU list and move unreferenced objects to
* the dispose list, removing them from LRU and hash table.
*/
- start = s->ls_purge_start;
+ if (nr != ~0)
+ start = s->ls_purge_start;
bnr = (nr == ~0) ? -1 : nr / (int)CFS_HASH_NBKT(s->ls_obj_hash) + 1;
again:
/*
--
1.7.1
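
For readers following the two hunks above, here is a minimal userspace sketch of the
resulting start-bucket / per-bucket-budget selection. The helper name pick_scan_params
and the parameters nbkt and purge_start are hypothetical stand-ins for
CFS_HASH_NBKT(s->ls_obj_hash) and s->ls_purge_start; this only models the changed
lines, it is not the kernel function itself.

#include <stdio.h>

/*
 * Illustrative model of the scan parameters chosen by lu_site_purge()
 * after this patch.  'nbkt' and 'purge_start' are hypothetical stand-ins
 * for CFS_HASH_NBKT(s->ls_obj_hash) and s->ls_purge_start.
 */
static void pick_scan_params(int nr, unsigned int nbkt,
			     unsigned int purge_start,
			     unsigned int *start, int *bnr)
{
	*start = 0;			/* default: scan from the first bucket */
	if (nr != ~0)			/* only a partial purge resumes from   */
		*start = purge_start;	/* where the previous scan stopped     */

	/* a purge-all request (nr == ~0) gets no per-bucket budget */
	*bnr = (nr == ~0) ? -1 : nr / (int)nbkt + 1;
}

int main(void)
{
	unsigned int start;
	int bnr;

	pick_scan_params(~0, 8, 5, &start, &bnr);	/* purge everything  */
	printf("purge-all: start=%u bnr=%d\n", start, bnr);

	pick_scan_params(100, 8, 5, &start, &bnr);	/* purge up to 100   */
	printf("partial:   start=%u bnr=%d\n", start, bnr);
	return 0;
}

With this change a purge-all request (nr == ~0) always starts scanning from bucket 0
with no per-bucket budget, while a partial purge still resumes from ls_purge_start.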
Thread overview: 29+ messages
2016-12-03 0:53 [PATCH 00/22] Next batch of missing work for upstream client James Simmons
2016-12-03 0:53 ` [PATCH 01/22] staging: lustre: llite: clear LLIF_DATA_MODIFIED in atomic James Simmons
2016-12-03 0:53 ` [PATCH 02/22] staging: lustre: osc: fix debug log message formatting James Simmons
2016-12-03 0:53 ` [PATCH 03/22] staging: lustre: mdt: race between open and migrate James Simmons
2016-12-03 0:53 ` [PATCH 04/22] staging: lustre: osc: handle osc eviction correctly James Simmons
2016-12-05 20:55 ` Dan Carpenter
2016-12-05 23:03 ` Oleg Drokin
2016-12-07 23:16 ` James Simmons
2016-12-03 0:53 ` [PATCH 05/22] staging: lustre: lmv: remove nlink check in lmv_revalidate_slaves James Simmons
2016-12-05 20:57 ` Dan Carpenter
2016-12-03 0:53 ` [PATCH 06/22] staging: lustre: llog: reset llog bitmap James Simmons
2016-12-03 0:53 ` James Simmons [this message]
2016-12-03 0:53 ` [PATCH 08/22] staging: lustre: clio: revise read ahead algorithm James Simmons
2016-12-03 0:53 ` [PATCH 09/22] staging: lustre: llite: Add client mount opt to ignore suppress_pings James Simmons
2016-12-03 0:53 ` [PATCH 10/22] staging: lustre: obdclass: limit lu_site hash table size on clients James Simmons
2016-12-03 0:53 ` [PATCH 11/22] staging: lustre: mdt: fail FMODE_WRITE open if the client is read only James Simmons
2016-12-03 0:53 ` [PATCH 12/22] staging: lustre: libcfs: report hnode value for cfs_hash_putref James Simmons
2016-12-03 0:53 ` [PATCH 13/22] staging: lustre: statahead: set sai_index_wait with lli_sa_lock held James Simmons
2016-12-03 0:53 ` [PATCH 14/22] staging: lustre: obd: add callback for llog_cat_process_or_fork James Simmons
2016-12-06 9:59 ` Greg Kroah-Hartman
2016-12-03 0:53 ` [PATCH 15/22] staging: lustre: rpc: increase bulk size James Simmons
2016-12-03 0:53 ` [PATCH 16/22] staging: lustre: llite: Invoke file_update_time in page_mkwrite James Simmons
2016-12-03 0:53 ` [PATCH 17/22] staging: lustre: clio: remove mtime check in vvp_io_fault_start() James Simmons
2016-12-03 0:53 ` [PATCH 18/22] staging: lustre: import: don't reconnect during connect interpret James Simmons
2016-12-03 0:53 ` [PATCH 19/22] staging: lustre: llite: ll_dir_ioctl cleanup of redundant comparisons James Simmons
2016-12-03 0:53 ` [PATCH 20/22] staging: lustre: osc: set lock data for readahead lock James Simmons
2016-12-03 0:53 ` [PATCH 21/22] staging: lustre: remove set but unused variables James Simmons
2016-12-03 0:53 ` [PATCH 22/22] staging: lustre: libcfs: remove lnet upcall code James Simmons
2016-12-06 10:00 ` [PATCH 00/22] Next batch of missing work for upstream client Greg Kroah-Hartman