From: Steve French
Subject: Re: [patch] cifs: use add_to_page_cache_lru
Date: Thu, 18 Mar 2010 13:43:17 -0500
Message-ID: <524f69651003181143i67d2fbbdw148619b6065bca88@mail.gmail.com>
In-Reply-To: <20100317062316.GC2869@laptop>
To: Nick Piggin
Cc: linux-fsdevel@vger.kernel.org, linux-cifs-client@lists.samba.org, Steve French

Seems reasonable to me (also Shaggy acked this). Any idea how likely it
is to affect performance (and would thus need to run through some simple
iozone and cthon (timing mode) or similar perf testing)?

On Wed, Mar 17, 2010 at 1:23 AM, Nick Piggin wrote:
> cifs: use add_to_page_cache_lru
>
> add_to_page_cache_lru is exported, so it should be used. Benefits over
> using a private pagevec: neater code, 128 bytes fewer stack used, percpu
> lru ordering is preserved, and finally don't need to flush pagevec
> before returning so batching may be shared with other LRU insertions.
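For anyone skimming the thread, the change amounts to the following pattern. This is a schematic sketch only, with the CIFS copy logic and error paths elided; the real diff is quoted in full further down:

```c
/* Before: the caller keeps a private pagevec on its stack and batches
 * the LRU insertion by hand, flushing before returning. */
static void readpages_old(struct address_space *mapping, struct page *page)
{
	struct pagevec lru_pvec;

	pagevec_init(&lru_pvec, 0);
	if (add_to_page_cache(page, mapping, page->index, GFP_KERNEL) == 0) {
		/* ... copy data into the page ... */
		if (!pagevec_add(&lru_pvec, page))
			__pagevec_lru_add_file(&lru_pvec);
	}
	pagevec_lru_add_file(&lru_pvec);	/* must flush before returning */
}

/* After: add_to_page_cache_lru() inserts into both the page cache and
 * the per-cpu LRU pagevec in one call, so the private pagevec (and its
 * ~128 bytes of stack) goes away and batching is shared with other LRU
 * insertions. */
static void readpages_new(struct address_space *mapping, struct page *page)
{
	if (add_to_page_cache_lru(page, mapping, page->index, GFP_KERNEL) == 0) {
		/* ... copy data into the page ... */
	}
}
```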
>
> Signed-off-by: Nick Piggin
> ---
>  fs/cifs/file.c |   13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
>
> Index: linux-2.6/fs/cifs/file.c
> ===================================================================
> --- linux-2.6.orig/fs/cifs/file.c
> +++ linux-2.6/fs/cifs/file.c
> @@ -1907,8 +1907,7 @@ int cifs_file_mmap(struct file *file, st
>
>
>  static void cifs_copy_cache_pages(struct address_space *mapping,
> -	struct list_head *pages, int bytes_read, char *data,
> -	struct pagevec *plru_pvec)
> +	struct list_head *pages, int bytes_read, char *data)
>  {
>  	struct page *page;
>  	char *target;
> @@ -1920,7 +1919,7 @@ static void cifs_copy_cache_pages(struct
>  		page = list_entry(pages->prev, struct page, lru);
>  		list_del(&page->lru);
>
> -		if (add_to_page_cache(page, mapping, page->index,
> +		if (add_to_page_cache_lru(page, mapping, page->index,
>  					GFP_KERNEL)) {
>  			page_cache_release(page);
>  			cFYI(1, ("Add page cache failed"));
> @@ -1946,8 +1945,6 @@ static void cifs_copy_cache_pages(struct
>  		flush_dcache_page(page);
>  		SetPageUptodate(page);
>  		unlock_page(page);
> -		if (!pagevec_add(plru_pvec, page))
> -			__pagevec_lru_add_file(plru_pvec);
>  		data += PAGE_CACHE_SIZE;
>  	}
>  	return;
> @@ -1966,7 +1963,6 @@ static int cifs_readpages(struct file *f
>  	unsigned int read_size, i;
>  	char *smb_read_data = NULL;
>  	struct smb_com_read_rsp *pSMBr;
> -	struct pagevec lru_pvec;
>  	struct cifsFileInfo *open_file;
>  	int buf_type = CIFS_NO_BUFFER;
>
> @@ -1980,7 +1976,6 @@ static int cifs_readpages(struct file *f
>  	cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
>  	pTcon = cifs_sb->tcon;
>
> -	pagevec_init(&lru_pvec, 0);
>  	cFYI(DBG2, ("rpages: num pages %d", num_pages));
>  	for (i = 0; i < num_pages; ) {
>  		unsigned contig_pages;
> @@ -2049,7 +2044,7 @@ static int cifs_readpages(struct file *f
>  			pSMBr = (struct smb_com_read_rsp *)smb_read_data;
>  			cifs_copy_cache_pages(mapping, page_list, bytes_read,
>  				smb_read_data + 4 /* RFC1001 hdr */ +
> -				le16_to_cpu(pSMBr->DataOffset), &lru_pvec);
> +				le16_to_cpu(pSMBr->DataOffset));
>
>  			i += bytes_read >> PAGE_CACHE_SHIFT;
>  			cifs_stats_bytes_read(pTcon, bytes_read);
> @@ -2082,8 +2077,6 @@ static int cifs_readpages(struct file *f
>  		bytes_read = 0;
>  	}
>
> -	pagevec_lru_add_file(&lru_pvec);
> -
>  /* need to free smb_read_data buf before exit */
>  	if (smb_read_data) {
>  		if (buf_type == CIFS_SMALL_BUFFER)

--
Thanks,

Steve