From: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>,
	David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH] btrfs: add special case to setget helpers for 64k pages
Date: Thu, 1 Jul 2021 19:09:15 -0500	[thread overview]
Message-ID: <dd4346f9-bc3d-b12f-3b32-1e1ecabb5b8b@embeddedor.com> (raw)
In-Reply-To: <fc90ec53-1632-e796-3bf0-f46c5df790bb@gmx.com>



On 7/1/21 18:59, Qu Wenruo wrote:
> 
> 
> On 2021/7/2 上午5:57, Gustavo A. R. Silva wrote:
>> On Thu, Jul 01, 2021 at 06:00:39PM +0200, David Sterba wrote:
>>> On 64K pages the size of the extent_buffer::pages array is 1 and
>>> compilation with -Warray-bounds warns due to
>>>
>>>    kaddr = page_address(eb->pages[idx + 1]);
>>>
>>> when reading byte range crossing page boundary.
>>>
>>> This never actually overflows the array, because on 64K pages all
>>> the data fit in one page and the bounds are checked by check_setget_bounds.
>>>
>>> To fix the reported overflow and warning add a copy of the non-crossing
>>> read/write code and put it behind a condition that's evaluated at
>>> compile time. That way only one implementation remains due to dead code
>>> elimination.
>>
>> Any chance we can use a flexible-array in struct extent_buffer instead,
>> so all the warnings are removed?
>>
>> Something like this:
>>
>> diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
>> index 62027f551b44..b82e8b694a3b 100644
>> --- a/fs/btrfs/extent_io.h
>> +++ b/fs/btrfs/extent_io.h
>> @@ -94,11 +94,11 @@ struct extent_buffer {
>>
>>          struct rw_semaphore lock;
>>
>> -       struct page *pages[INLINE_EXTENT_BUFFER_PAGES];
>>          struct list_head release_list;
>>   #ifdef CONFIG_BTRFS_DEBUG
>>          struct list_head leak_list;
>>   #endif
>> +       struct page *pages[];
>>   };
> 
> But wouldn't that change the size of the extent_buffer structure
> and affect the kmem cache for it?

Could you please point out the places in the code that would be
affected?

I'm trying to understand this code and see the possibility of
using a flex-array together with proper memory allocation, so
we can avoid having a one-element array in extent_buffer.
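
Just to illustrate the direction I have in mind, here is a rough,
hypothetical sketch (not the current btrfs code; "num_pages" is only
a placeholder for however many pages the buffer actually needs). With
a flexible array the allocation site would have to size the object
explicitly, e.g. with struct_size(), instead of coming from the
fixed-size kmem cache you mention above:

        struct extent_buffer *eb;

        /* Account for the trailing pages[] flexible array. */
        eb = kzalloc(struct_size(eb, pages, num_pages), GFP_NOFS);
        if (!eb)
                return NULL;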

Not sure to what extent this would be possible, so any pointers
would be greatly appreciated. :)

Thanks
--
Gustavo

> 
> Thanks,
> Qu
>>
>>   /*
>>
>> which is actually what is needed in this case to silence the
>> array-bounds warnings: replacing the one-element array with a
>> flexible-array member[1] in struct extent_buffer.
>>
>> -- 
>> Gustavo
>>
>> [1] https://www.kernel.org/doc/html/v5.10/process/deprecated.html#zero-length-and-one-element-arrays
>>
>>>
>>> Link: https://lore.kernel.org/lkml/20210623083901.1d49d19d@canb.auug.org.au/
>>> CC: Gustavo A. R. Silva <gustavoars@kernel.org>
>>> Signed-off-by: David Sterba <dsterba@suse.com>
>>> ---
>>>   fs/btrfs/struct-funcs.c | 66 +++++++++++++++++++++++++----------------
>>>   1 file changed, 41 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/fs/btrfs/struct-funcs.c b/fs/btrfs/struct-funcs.c
>>> index 8260f8bb3ff0..51204b280da8 100644
>>> --- a/fs/btrfs/struct-funcs.c
>>> +++ b/fs/btrfs/struct-funcs.c
>>> @@ -73,14 +73,18 @@ u##bits btrfs_get_token_##bits(struct btrfs_map_token *token,        \
>>>       }                                \
>>>       token->kaddr = page_address(token->eb->pages[idx]);        \
>>>       token->offset = idx << PAGE_SHIFT;                \
>>> -    if (oip + size <= PAGE_SIZE)                    \
>>> +    if (INLINE_EXTENT_BUFFER_PAGES == 1) {                \
>>>           return get_unaligned_le##bits(token->kaddr + oip);    \
>>> +    } else {                            \
>>> +        if (oip + size <= PAGE_SIZE)                \
>>> +            return get_unaligned_le##bits(token->kaddr + oip); \
>>>                                       \
>>> -    memcpy(lebytes, token->kaddr + oip, part);            \
>>> -    token->kaddr = page_address(token->eb->pages[idx + 1]);        \
>>> -    token->offset = (idx + 1) << PAGE_SHIFT;            \
>>> -    memcpy(lebytes + part, token->kaddr, size - part);        \
>>> -    return get_unaligned_le##bits(lebytes);                \
>>> +        memcpy(lebytes, token->kaddr + oip, part);        \
>>> +        token->kaddr = page_address(token->eb->pages[idx + 1]);    \
>>> +        token->offset = (idx + 1) << PAGE_SHIFT;        \
>>> +        memcpy(lebytes + part, token->kaddr, size - part);    \
>>> +        return get_unaligned_le##bits(lebytes);            \
>>> +    }                                \
>>>   }                                    \
>>>   u##bits btrfs_get_##bits(const struct extent_buffer *eb,        \
>>>                const void *ptr, unsigned long off)        \
>>> @@ -94,13 +98,17 @@ u##bits btrfs_get_##bits(const struct extent_buffer *eb,        \
>>>       u8 lebytes[sizeof(u##bits)];                    \
>>>                                       \
>>>       ASSERT(check_setget_bounds(eb, ptr, off, size));        \
>>> -    if (oip + size <= PAGE_SIZE)                    \
>>> +    if (INLINE_EXTENT_BUFFER_PAGES == 1) {                \
>>>           return get_unaligned_le##bits(kaddr + oip);        \
>>> +    } else {                            \
>>> +        if (oip + size <= PAGE_SIZE)                \
>>> +            return get_unaligned_le##bits(kaddr + oip);    \
>>>                                       \
>>> -    memcpy(lebytes, kaddr + oip, part);                \
>>> -    kaddr = page_address(eb->pages[idx + 1]);            \
>>> -    memcpy(lebytes + part, kaddr, size - part);            \
>>> -    return get_unaligned_le##bits(lebytes);                \
>>> +        memcpy(lebytes, kaddr + oip, part);            \
>>> +        kaddr = page_address(eb->pages[idx + 1]);        \
>>> +        memcpy(lebytes + part, kaddr, size - part);        \
>>> +        return get_unaligned_le##bits(lebytes);            \
>>> +    }                                \
>>>   }                                    \
>>>   void btrfs_set_token_##bits(struct btrfs_map_token *token,        \
>>>                   const void *ptr, unsigned long off,        \
>>> @@ -124,15 +132,19 @@ void btrfs_set_token_##bits(struct btrfs_map_token *token,        \
>>>       }                                \
>>>       token->kaddr = page_address(token->eb->pages[idx]);        \
>>>       token->offset = idx << PAGE_SHIFT;                \
>>> -    if (oip + size <= PAGE_SIZE) {                    \
>>> +    if (INLINE_EXTENT_BUFFER_PAGES == 1) {                \
>>>           put_unaligned_le##bits(val, token->kaddr + oip);    \
>>> -        return;                            \
>>> +    } else {                            \
>>> +        if (oip + size <= PAGE_SIZE) {                \
>>> +            put_unaligned_le##bits(val, token->kaddr + oip); \
>>> +            return;                        \
>>> +        }                            \
>>> +        put_unaligned_le##bits(val, lebytes);            \
>>> +        memcpy(token->kaddr + oip, lebytes, part);        \
>>> +        token->kaddr = page_address(token->eb->pages[idx + 1]);    \
>>> +        token->offset = (idx + 1) << PAGE_SHIFT;        \
>>> +        memcpy(token->kaddr, lebytes + part, size - part);    \
>>>       }                                \
>>> -    put_unaligned_le##bits(val, lebytes);                \
>>> -    memcpy(token->kaddr + oip, lebytes, part);            \
>>> -    token->kaddr = page_address(token->eb->pages[idx + 1]);        \
>>> -    token->offset = (idx + 1) << PAGE_SHIFT;            \
>>> -    memcpy(token->kaddr, lebytes + part, size - part);        \
>>>   }                                    \
>>>   void btrfs_set_##bits(const struct extent_buffer *eb, void *ptr,    \
>>>                 unsigned long off, u##bits val)            \
>>> @@ -146,15 +158,19 @@ void btrfs_set_##bits(const struct extent_buffer *eb, void *ptr,    \
>>>       u8 lebytes[sizeof(u##bits)];                    \
>>>                                       \
>>>       ASSERT(check_setget_bounds(eb, ptr, off, size));        \
>>> -    if (oip + size <= PAGE_SIZE) {                    \
>>> +    if (INLINE_EXTENT_BUFFER_PAGES == 1) {                \
>>>           put_unaligned_le##bits(val, kaddr + oip);        \
>>> -        return;                            \
>>> -    }                                \
>>> +    } else {                            \
>>> +        if (oip + size <= PAGE_SIZE) {                \
>>> +            put_unaligned_le##bits(val, kaddr + oip);    \
>>> +            return;                        \
>>> +        }                            \
>>>                                       \
>>> -    put_unaligned_le##bits(val, lebytes);                \
>>> -    memcpy(kaddr + oip, lebytes, part);                \
>>> -    kaddr = page_address(eb->pages[idx + 1]);            \
>>> -    memcpy(kaddr, lebytes + part, size - part);            \
>>> +        put_unaligned_le##bits(val, lebytes);            \
>>> +        memcpy(kaddr + oip, lebytes, part);            \
>>> +        kaddr = page_address(eb->pages[idx + 1]);        \
>>> +        memcpy(kaddr, lebytes + part, size - part);        \
>>> +    }                                \
>>>   }
>>>
>>>   DEFINE_BTRFS_SETGET_BITS(8)
>>> -- 
>>> 2.31.1
>>>


Thread overview: 16+ messages
2021-07-01 16:00 [PATCH] btrfs: add special case to setget helpers for 64k pages David Sterba
2021-07-01 21:57 ` Gustavo A. R. Silva
2021-07-01 23:59   ` Qu Wenruo
2021-07-02  0:09     ` Gustavo A. R. Silva [this message]
2021-07-02  0:21       ` Qu Wenruo
2021-07-02  0:39         ` Gustavo A. R. Silva
2021-07-02  0:39           ` Qu Wenruo
2021-07-02  1:09             ` Gustavo A. R. Silva
2021-07-02 10:22           ` David Sterba
2021-07-02  7:10 ` Christoph Hellwig
2021-07-02 11:06   ` David Sterba
2021-07-05  8:33     ` Christoph Hellwig
2021-07-08 14:34       ` David Sterba
2021-07-14 23:37         ` Gustavo A. R. Silva
2021-07-28 15:32           ` David Sterba
2021-07-28 16:00             ` David Sterba
