From: kernel test robot <lkp@intel.com>
To: Gaurav Gangalwar <gaurav.gangalwar@gmail.com>,
trondmy@kernel.org, anna@kernel.org, tom@talpey.com,
chuck.lever@oracle.com
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev,
linux-nfs@vger.kernel.org,
Gaurav Gangalwar <gaurav.gangalwar@gmail.com>
Subject: Re: [PATCH] nfs: Implement delayed data server destruction with hold cache
Date: Wed, 19 Nov 2025 19:20:00 +0800 [thread overview]
Message-ID: <202511191852.nGdrhdUC-lkp@intel.com> (raw)
In-Reply-To: <20251118105752.52098-1-gaurav.gangalwar@gmail.com>
Hi Gaurav,
kernel test robot noticed the following build warnings:
[auto build test WARNING on trondmy-nfs/linux-next]
[also build test WARNING on linus/master v6.18-rc6 next-20251119]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Gaurav-Gangalwar/nfs-Implement-delayed-data-server-destruction-with-hold-cache/20251118-190020
base: git://git.linux-nfs.org/projects/trondmy/linux-nfs.git linux-next
patch link: https://lore.kernel.org/r/20251118105752.52098-1-gaurav.gangalwar%40gmail.com
patch subject: [PATCH] nfs: Implement delayed data server destruction with hold cache
config: arm-lpc32xx_defconfig (https://download.01.org/0day-ci/archive/20251119/202511191852.nGdrhdUC-lkp@intel.com/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251119/202511191852.nGdrhdUC-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511191852.nGdrhdUC-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> fs/nfs/pnfs_nfs.c:753:6: warning: variable 'active_count' set but not used [-Wunused-but-set-variable]
753 | int active_count = 0, hold_count = 0, expired_count = 0;
| ^
>> fs/nfs/pnfs_nfs.c:753:24: warning: variable 'hold_count' set but not used [-Wunused-but-set-variable]
753 | int active_count = 0, hold_count = 0, expired_count = 0;
| ^
>> fs/nfs/pnfs_nfs.c:753:40: warning: variable 'expired_count' set but not used [-Wunused-but-set-variable]
753 | int active_count = 0, hold_count = 0, expired_count = 0;
| ^
3 warnings generated.
vim +/active_count +753 fs/nfs/pnfs_nfs.c
741
742 /*
743 * Periodic cleanup task to check hold cache and destroy expired DS entries
744 */
745 void nfs4_pnfs_ds_cleanup_work(struct work_struct *work)
746 {
747 struct nfs_net *nn = container_of(work, struct nfs_net,
748 nfs4_data_server_cleanup_work.work);
749 struct nfs4_pnfs_ds *ds, *tmp;
750 LIST_HEAD(destroy_list);
751 unsigned long grace_period = nfs4_pnfs_ds_grace_period * HZ;
752 unsigned long now = jiffies;
> 753 int active_count = 0, hold_count = 0, expired_count = 0;
754
755 dprintk("NFS: DS cleanup work started for namespace (jiffies=%lu)\n", now);
756
757 spin_lock(&nn->nfs4_data_server_lock);
758
759 /* Count entries in active cache */
760 list_for_each_entry(ds, &nn->nfs4_data_server_cache, ds_node)
761 active_count++;
762
763 /* Process hold cache */
764 list_for_each_entry_safe(ds, tmp, &nn->nfs4_data_server_hold_cache, ds_node) {
765 unsigned long time_since_last_access = now - ds->ds_last_access;
766
767 hold_count++;
768 if (time_since_last_access >= grace_period) {
769 /* Grace period expired, move to destroy list */
770 dprintk("NFS: DS cleanup task destroying expired DS: %s (idle for %lu seconds)\n",
771 ds->ds_remotestr, time_since_last_access / HZ);
772 list_move(&ds->ds_node, &destroy_list);
773 expired_count++;
774 } else {
775 dprintk("NFS: DS %s in hold cache (idle for %lu seconds, %lu seconds remaining)\n",
776 ds->ds_remotestr, time_since_last_access / HZ,
777 (grace_period - time_since_last_access) / HZ);
778 }
779 }
780
781 spin_unlock(&nn->nfs4_data_server_lock);
782
783 dprintk("NFS: DS cleanup work: active_cache=%d, hold_cache=%d, expired=%d\n",
784 active_count, hold_count, expired_count);
785
786 /* Destroy DS entries outside of lock */
787 list_for_each_entry_safe(ds, tmp, &destroy_list, ds_node) {
788 list_del_init(&ds->ds_node);
789 destroy_ds(ds);
790 }
791
792 /* Reschedule cleanup task */
793 dprintk("NFS: DS cleanup work completed, rescheduling in %u seconds\n",
794 nfs4_pnfs_ds_cleanup_interval);
795 schedule_delayed_work(&nn->nfs4_data_server_cleanup_work,
796 nfs4_pnfs_ds_cleanup_interval * HZ);
797 }
798 EXPORT_SYMBOL_GPL(nfs4_pnfs_ds_cleanup_work);
799
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki