public inbox for linux-kernel@vger.kernel.org
From: Matthew Wilcox <willy@infradead.org>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: Hannes Reinecke <hare@suse.de>,
	Pankaj Raghav <p.raghav@samsung.com>,
	brauner@kernel.org, viro@zeniv.linux.org.uk,
	akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, gost.dev@samsung.com
Subject: Re: [RFC 0/4] convert create_page_buffers to create_folio_buffers
Date: Sat, 15 Apr 2023 03:31:54 +0100	[thread overview]
Message-ID: <ZDoMmtcwNTINAu3N@casper.infradead.org> (raw)
In-Reply-To: <ZDn3XPMA024t+C1x@bombadil.infradead.org>

On Fri, Apr 14, 2023 at 06:01:16PM -0700, Luis Chamberlain wrote:
> a) dynamically allocate those now
> b) do a cursory review of the users of that and prepare them
>    to grok buffer heads which are blocksize based rather than
>    PAGE_SIZE based. So we just try to kill MAX_BUF_PER_PAGE.
> 
> Without a) I think buffers after PAGE_SIZE won't get submit_bh() or lock for
> bs > PAGE_SIZE right now.

Worse, we'll overflow the array and corrupt the stack.

This one is a simple fix ...

+++ b/fs/buffer.c
@@ -2282,7 +2282,7 @@ int block_read_full_folio(struct folio *folio, get_block_t *get_block)
 {
        struct inode *inode = folio->mapping->host;
        sector_t iblock, lblock;
-       struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
+       struct buffer_head *bh, *head;
        unsigned int blocksize, bbits;
        int nr, i;
        int fully_mapped = 1;
@@ -2335,7 +2335,6 @@ int block_read_full_folio(struct folio *folio, get_block_t *get_block)
                        if (buffer_uptodate(bh))
                                continue;
                }
-               arr[nr++] = bh;
        } while (i++, iblock++, (bh = bh->b_this_page) != head);
 
        if (fully_mapped)
@@ -2353,24 +2352,27 @@ int block_read_full_folio(struct folio *folio, get_block_t *get_block)
        }
 
        /* Stage two: lock the buffers */
-       for (i = 0; i < nr; i++) {
-               bh = arr[i];
+       bh = head;
+       do {
                lock_buffer(bh);
                mark_buffer_async_read(bh);
-       }
+               bh = bh->b_this_page;
+       } while (bh != head);
 
        /*
         * Stage 3: start the IO.  Check for uptodateness
         * inside the buffer lock in case another process reading
         * the underlying blockdev brought it uptodate (the sct fix).
         */
-       for (i = 0; i < nr; i++) {
-               bh = arr[i];
+       bh = head;
+       do {
                if (buffer_uptodate(bh))
                        end_buffer_async_read(bh, 1);
                else
                        submit_bh(REQ_OP_READ, bh);
-       }
+               bh = bh->b_this_page;
+       } while (bh != head);
+
        return 0;
 }
 EXPORT_SYMBOL(block_read_full_folio);


