public inbox for linux-rdma@vger.kernel.org
From: kernel test robot <lkp@intel.com>
To: Chuck Lever <cel@kernel.org>, NeilBrown <neilb@ownmail.net>,
	Jeff Layton <jlayton@kernel.org>,
	Olga Kornievskaia <okorniev@redhat.com>,
	Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>,
	Leon Romanovsky <leon@kernel.org>, Christoph Hellwig <hch@lst.de>
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev,
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org,
	Chuck Lever <chuck.lever@oracle.com>
Subject: Re: [PATCH v2 2/2] svcrdma: Use contiguous pages for RDMA Read sink buffers
Date: Sat, 14 Mar 2026 13:06:04 +0800	[thread overview]
Message-ID: <202603141225.oTCKSz8H-lkp@intel.com> (raw)
In-Reply-To: <20260312134008.7387-3-cel@kernel.org>

Hi Chuck,

kernel test robot noticed the following build errors:

[auto build test ERROR on v7.0-rc1]
[also build test ERROR on next-20260311]
[If your patch is applied to the wrong git tree, kindly drop us a note.
When submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Chuck-Lever/RDMA-rw-Fix-MR-pool-exhaustion-in-bvec-RDMA-READ-path/20260313-085521
base:   v7.0-rc1
patch link:    https://lore.kernel.org/r/20260312134008.7387-3-cel%40kernel.org
patch subject: [PATCH v2 2/2] svcrdma: Use contiguous pages for RDMA Read sink buffers
config: riscv-allyesconfig (https://download.01.org/0day-ci/archive/20260314/202603141225.oTCKSz8H-lkp@intel.com/config)
compiler: clang version 16.0.6 (https://github.com/llvm/llvm-project 7cbf1a2591520c2491aa35339f227775f4d3adf6)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260314/202603141225.oTCKSz8H-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603141225.oTCKSz8H-lkp@intel.com/

All errors (new ones prefixed by >>):

>> net/sunrpc/xprtrdma/svc_rdma_rw.c:813:3: error: call to undeclared function 'svc_rqst_page_release'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
                   svc_rqst_page_release(rqstp,
                   ^
   net/sunrpc/xprtrdma/svc_rdma_rw.c:813:3: note: did you mean 'svc_rdma_cc_release'?
   net/sunrpc/xprtrdma/svc_rdma_rw.c:193:6: note: 'svc_rdma_cc_release' declared here
   void svc_rdma_cc_release(struct svcxprt_rdma *rdma,
        ^
   1 error generated.


vim +/svc_rqst_page_release +813 net/sunrpc/xprtrdma/svc_rdma_rw.c

   779	
   780	/*
   781	 * svc_rdma_fill_contig_bvec - Replace rq_pages with a contiguous allocation
   782	 * @rqstp: RPC transaction context
   783	 * @head: context for ongoing I/O
   784	 * @bv: bvec entry to fill
   785	 * @pages_left: number of data pages remaining in the segment
   786	 * @len_left: bytes remaining in the segment
   787	 *
   788	 * On success, fills @bv with a bvec spanning the contiguous range and
   789	 * advances rc_curpage/rc_page_count. Returns the byte length covered,
   790	 * or zero if the allocation failed or would overrun rq_maxpages.
   791	 */
   792	static unsigned int
   793	svc_rdma_fill_contig_bvec(struct svc_rqst *rqstp,
   794				  struct svc_rdma_recv_ctxt *head,
   795				  struct bio_vec *bv, unsigned int pages_left,
   796				  unsigned int len_left)
   797	{
   798		unsigned int order, alloc_nr, chunk_pages, chunk_len, i;
   799		struct page *page;
   800	
   801		page = svc_rdma_alloc_read_pages(pages_left, &order);
   802		if (!page)
   803			return 0;
   804		alloc_nr = 1 << order;
   805	
   806		if (head->rc_curpage + alloc_nr > rqstp->rq_maxpages) {
   807			for (i = 0; i < alloc_nr; i++)
   808				__free_page(page + i);
   809			return 0;
   810		}
   811	
   812		for (i = 0; i < alloc_nr; i++) {
 > 813			svc_rqst_page_release(rqstp,
   814					      rqstp->rq_pages[head->rc_curpage + i]);
   815			rqstp->rq_pages[head->rc_curpage + i] = page + i;
   816		}
   817	
   818		chunk_pages = min(alloc_nr, pages_left);
   819		chunk_len = min_t(unsigned int, chunk_pages << PAGE_SHIFT, len_left);
   820		bvec_set_page(bv, page, chunk_len, 0);
   821		head->rc_page_count += chunk_pages;
   822		head->rc_curpage += chunk_pages;
   823		return chunk_len;
   824	}
   825	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


Thread overview: 7+ messages
2026-03-12 13:40 [PATCH v2 0/2] RDMA/rw: Fix MR pool exhaustion in bvec RDMA READ path Chuck Lever
2026-03-12 13:40 ` [PATCH v2 1/2] " Chuck Lever
2026-03-12 13:40 ` [PATCH v2 2/2] svcrdma: Use contiguous pages for RDMA Read sink buffers Chuck Lever
2026-03-13  8:51   ` kernel test robot
2026-03-13 12:34     ` Chuck Lever
2026-03-13 17:31   ` kernel test robot
2026-03-14  5:06   ` kernel test robot [this message]
