From: Eric Biggers <ebiggers3@gmail.com>
To: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, tytso@mit.edu
Subject: Re: [RFC PATCH 7/8] ext4: decrypt blocks whose size is less than page size
Date: Tue, 16 Jan 2018 18:39:25 -0800 [thread overview]
Message-ID: <20180117023925.GG4477@zzz.localdomain> (raw)
In-Reply-To: <20180112141129.27507-8-chandan@linux.vnet.ibm.com>
On Fri, Jan 12, 2018 at 07:41:28PM +0530, Chandan Rajendra wrote:
> This commit adds code to decrypt all file blocks mapped by page.
>
[...]
> diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
> index 0be590b..e494e2d 100644
> --- a/fs/ext4/readpage.c
> +++ b/fs/ext4/readpage.c
> @@ -62,6 +62,143 @@ static inline bool ext4_bio_encrypted(struct bio *bio)
> #endif
> }
>
> +static void ext4_complete_block(struct work_struct *work)
> +{
> + struct fscrypt_ctx *ctx =
> + container_of(work, struct fscrypt_ctx, r.work);
> + struct buffer_head *first, *bh, *tmp;
> + struct bio *bio;
> + struct bio_vec *bv;
> + struct page *page;
> + struct inode *inode;
> + u64 blk_nr;
> + unsigned long flags;
> + int page_uptodate = 1;
> + int ret;
> +
> + bio = ctx->r.bio;
> + BUG_ON(bio->bi_vcnt != 1);
> +
> + bv = bio->bi_io_vec;
> + page = bv->bv_page;
> + inode = page->mapping->host;
> +
> + BUG_ON(bv->bv_len != i_blocksize(inode));
> +
> + blk_nr = page->index << (PAGE_SHIFT - inode->i_blkbits);
> + blk_nr += bv->bv_offset >> inode->i_blkbits;
> +
> + bh = ctx->r.bh;
> +
> + ret = fscrypt_decrypt_page(inode, page, bv->bv_len,
> + bv->bv_offset, blk_nr);
> + if (ret) {
> + WARN_ON_ONCE(1);
> + SetPageError(page);
> + } else {
> + set_buffer_uptodate(bh);
> + }
> +
> + fscrypt_release_ctx(ctx);
> + bio_put(bio);
> +
> + first = page_buffers(page);
> + local_irq_save(flags);
> + bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
> +
> + clear_buffer_async_read(bh);
> + unlock_buffer(bh);
> + tmp = bh;
> + do {
> + if (!buffer_uptodate(tmp))
> + page_uptodate = 0;
> + if (buffer_async_read(tmp)) {
> + BUG_ON(!buffer_locked(tmp));
> + goto still_busy;
> + }
> + tmp = tmp->b_this_page;
> + } while (tmp != bh);
> +
> + bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
> + local_irq_restore(flags);
> +
> + if (page_uptodate && !PageError(page))
> + SetPageUptodate(page);
> + unlock_page(page);
> + return;
> +
> +still_busy:
> + bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
> + local_irq_restore(flags);
> + return;
> +}
First, see my comments on Patch 2 regarding whether we should really be
copy+pasting all this code from fs/buffer.c into ext4.
If we nevertheless do have to have this, why can't ext4_complete_block() and
block_end_io() simply call end_buffer_async_read(), rather than duplicating its
body?
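Untested sketch of what I have in mind, assuming end_buffer_async_read() is
made non-static as in patch 2 (names taken from the quoted patch):

```c
static void ext4_complete_block(struct work_struct *work)
{
	struct fscrypt_ctx *ctx =
		container_of(work, struct fscrypt_ctx, r.work);
	struct bio *bio = ctx->r.bio;
	struct bio_vec *bv = bio->bi_io_vec;
	struct page *page = bv->bv_page;
	struct inode *inode = page->mapping->host;
	struct buffer_head *bh = ctx->r.bh;
	u64 blk_nr;
	int err;

	blk_nr = ((u64)page->index << (PAGE_SHIFT - inode->i_blkbits)) +
		 (bv->bv_offset >> inode->i_blkbits);

	err = fscrypt_decrypt_page(inode, page, bv->bv_len,
				   bv->bv_offset, blk_nr);
	WARN_ON_ONCE(err);

	fscrypt_release_ctx(ctx);
	bio_put(bio);

	/*
	 * end_buffer_async_read() already does the BH_Uptodate_Lock
	 * dance, the page-uptodate accounting, SetPageError() on
	 * failure, and the final unlock_page().
	 */
	end_buffer_async_read(bh, !err);
}
```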
> +
> +static void block_end_io(struct bio *bio)
> +{
[...]
>
> +int ext4_block_read_full_page(struct page *page)
> +{
> + struct inode *inode = page->mapping->host;
> + struct fscrypt_ctx *ctx;
> + struct bio *bio;
> + sector_t iblock, lblock;
> + struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
> + unsigned int blocksize, bbits;
> + int nr, i;
> + int fully_mapped = 1;
> + int ret;
> +
> + head = create_page_buffers(page, inode, 0);
> + blocksize = head->b_size;
> + bbits = block_size_bits(blocksize);
> +
> + iblock = (sector_t)page->index << (PAGE_SHIFT - bbits);
> + lblock = (i_size_read(inode)+blocksize-1) >> bbits;
> + bh = head;
> + nr = 0;
> + i = 0;
> +
> + do {
> + if (buffer_uptodate(bh))
> + continue;
> +
> + if (!buffer_mapped(bh)) {
> + int err = 0;
> +
> + fully_mapped = 0;
> + if (iblock < lblock) {
> + WARN_ON(bh->b_size != blocksize);
> + err = ext4_get_block(inode, iblock, bh, 0);
> + if (err)
> + SetPageError(page);
> + }
> + if (!buffer_mapped(bh)) {
> + zero_user(page, i << bbits, blocksize);
> + if (!err)
> + set_buffer_uptodate(bh);
> + continue;
> + }
> + /*
> + * get_block() might have updated the buffer
> + * synchronously
> + */
> + if (buffer_uptodate(bh))
> + continue;
> + }
> + arr[nr++] = bh;
> + } while (i++, iblock++, (bh = bh->b_this_page) != head);
> +
> + if (fully_mapped)
> + SetPageMappedToDisk(page);
> +
> + if (!nr) {
> + /*
> + * All buffers are uptodate - we can set the page uptodate
> + * as well. But not if ext4_get_block() returned an error.
> + */
> + if (!PageError(page))
> + SetPageUptodate(page);
> + unlock_page(page);
> + return 0;
> + }
> +
> + /* Stage two: lock the buffers */
> + for (i = 0; i < nr; i++) {
> + bh = arr[i];
> + lock_buffer(bh);
> + set_buffer_async_read(bh);
> + }
> +
> + /*
> + * Stage 3: start the IO. Check for uptodateness
> + * inside the buffer lock in case another process reading
> + * the underlying blockdev brought it uptodate (the sct fix).
> + */
> + for (i = 0; i < nr; i++) {
> + ctx = NULL;
> + bh = arr[i];
> +
> + if (buffer_uptodate(bh)) {
> + end_buffer_async_read(bh, 1);
> + continue;
> + }
> +
> + if (ext4_encrypted_inode(inode)
> + && S_ISREG(inode->i_mode)) {
> + ctx = fscrypt_get_ctx(inode, GFP_NOFS);
> + if (IS_ERR(ctx)) {
> + set_page_error:
> + SetPageError(page);
> + zero_user_segment(page, bh_offset(bh), blocksize);
> + continue;
This isn't cleaning up properly; the buffer_head is being left locked and with
BH_Async_Read set, and that means the page will never be unlocked either.
'end_buffer_async_read(bh, 0)' should do it.
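I.e., something like the following (untested; note also that
zero_user_segment() takes an end offset, not a length, so the quoted call only
does the right thing for the first block in the page):

```c
		if (ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode)) {
			ctx = fscrypt_get_ctx(inode, GFP_NOFS);
			if (IS_ERR(ctx)) {
				zero_user_segment(page, bh_offset(bh),
						  bh_offset(bh) + blocksize);
				/*
				 * Clears BH_Async_Read, sets PageError,
				 * unlocks the buffer, and unlocks the page
				 * once all its buffers have completed.
				 */
				end_buffer_async_read(bh, 0);
				continue;
			}
			ctx->r.bh = bh;
		}
```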
> + }
> + ctx->r.bh = bh;
> + }
> +
> + bio = bio_alloc(GFP_KERNEL, 1);
GFP_NOIO
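That is, since a GFP_KERNEL allocation here can recurse into reclaim-driven
I/O while the page is locked:

```c
	bio = bio_alloc(GFP_NOIO, 1);
```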
Eric