From: NeilBrown <neilb@suse.de>
To: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] block_dev/DIO: Optionally allocate single 'struct dio' per file.
Date: Thu, 05 Mar 2015 10:57:39 +1100
Message-ID: <20150304235739.17330.94189.stgit@notabene.brown>
In-Reply-To: <20150304234911.17330.65139.stgit@notabene.brown>
To be able to support RAID metadata operations in user-space, mdmon
(part of mdadm) sometimes needs to update the metadata on an array
before any future writes to the array are permitted. This is
particularly needed for recording a device failure.

If that array is being used for swap (and even to some extent when
just used for a filesystem) then any memory allocation performed by
mdmon can cause a deadlock if the allocation waits for data to be
written out to the array.

mdmon uses mlockall(MCL_FUTURE|MCL_CURRENT) and is careful not to
allocate any memory at the wrong time. However, the kernel sometimes
allocates memory on its behalf, and this can deadlock.
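
For illustration only, a minimal user-space sketch of that locking
pattern (this is not mdmon code; the structure and comments are mine):

    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
            /* Lock all current and future mappings so the daemon's own
             * pages can never be paged out while it waits to update
             * the array's metadata. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                    perror("mlockall");
                    return 1;
            }
            /* ... pre-allocate every buffer here, then monitor the
             * array without any further allocation ... */
            return 0;
    }
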
Updating the metadata requires an O_DIRECT write to each of a number
of files (which were previously opened). Each write requires
allocating a 'struct dio'.
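
Purely as illustration (a hedged user-space sketch, not mdmon's actual
code; the helper name and the 4KiB block size are invented), each such
update is roughly:

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Write one 4KiB metadata block through an already-open O_DIRECT
     * file descriptor.  Buffer, length and offset must satisfy the
     * device's O_DIRECT alignment rules. */
    static int write_meta(int fd, const void *meta, off_t offset)
    {
            void *buf;
            int ret = -1;

            /* In mlockall()ed code this buffer would be allocated
             * once, up front, rather than per write. */
            if (posix_memalign(&buf, 4096, 4096) != 0)
                    return -1;
            memcpy(buf, meta, 4096);

            if (pwrite(fd, buf, 4096, offset) == 4096)
                    ret = 0;
            free(buf);
            return ret;
    }
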
To avoid this deadlock risk, this patch caches the 'struct dio' the
first time it is allocated so that future writes on the file do not
require the allocation. It is cached in the '->private_data' of the
struct file. Only a single struct is cached, so only sequential
accesses are allocation-free.
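
The cache is a single-slot compare-and-swap scheme. A user-space
analogue, for illustration only (the patch below does the same thing
with cmpxchg() on file->private_data and kmem_cache_alloc()/free()):

    #include <stdatomic.h>
    #include <stdlib.h>

    /* One cached object per "file"; NULL means the slot is empty. */
    struct slot { _Atomic(void *) cached; };

    static void *get_obj(struct slot *s, size_t size)
    {
            /* Claim the parked object if there is one, else allocate. */
            void *obj = atomic_exchange(&s->cached, NULL);
            return obj ? obj : malloc(size);
    }

    static void put_obj(struct slot *s, void *obj)
    {
            void *expected = NULL;

            /* Park the object if the slot is empty; if another object
             * got there first, just free this one. */
            if (!atomic_compare_exchange_strong(&s->cached, &expected, obj))
                    free(obj);
    }

With only one slot, a second in-flight dio on the same file simply
falls back to the normal slab allocation.
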
The caching is only performed if mlockall(MCL_FUTURE) is in effect,
thus limiting the change to only those cases where it will bring a
benefit.

Effectively, the memory allocated for O_DIRECT access is 'locked' in
place for future use.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/block_dev.c     |    7 ++++++-
 fs/direct-io.c     |   18 ++++++++++++++++--
 include/linux/fs.h |    6 ++++++
 3 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 975266be67d3..ed55e5329563 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -155,7 +155,10 @@ blkdev_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter,
 	return __blockdev_direct_IO(rw, iocb, inode, I_BDEV(inode), iter,
 				    offset, blkdev_get_block,
-				    NULL, NULL, 0);
+				    NULL, NULL,
+				    current->mm &&
+				    (current->mm->def_flags & VM_LOCKED)
+				    ? DIO_PERSISTENT_DIO : 0);
 }
 
 int __sync_blockdev(struct block_device *bdev, int wait)
@@ -1567,6 +1570,8 @@ EXPORT_SYMBOL(blkdev_put);
 static int blkdev_close(struct inode * inode, struct file * filp)
 {
 	struct block_device *bdev = I_BDEV(filp->f_mapping->host);
+	if (filp->private_data)
+		dio_free(filp->private_data);
 	blkdev_put(bdev, filp->f_mode);
 	return 0;
 }
diff --git a/fs/direct-io.c b/fs/direct-io.c
index e181b6b2e297..ece5e45933d2 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -143,6 +143,11 @@ struct dio {
 static struct kmem_cache *dio_cache __read_mostly;
 
+void dio_free(struct dio *dio)
+{
+	kmem_cache_free(dio_cache, dio);
+}
+
 /*
  * How many pages are in the queue?
  */
@@ -268,7 +273,9 @@ static ssize_t dio_complete(struct dio *dio, loff_t offset, ssize_t ret,
 		aio_complete(dio->iocb, ret, 0);
 	}
 
-	kmem_cache_free(dio_cache, dio);
+	if (!(dio->flags & DIO_PERSISTENT_DIO) ||
+	    cmpxchg(&dio->iocb->ki_filp->private_data, NULL, dio) != NULL)
+		dio_free(dio);
 	return ret;
 }
@@ -1131,7 +1138,14 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
 	if (rw == READ && !iov_iter_count(iter))
 		return 0;
 
-	dio = kmem_cache_alloc(dio_cache, GFP_KERNEL);
+	dio = NULL;
+	if ((flags & DIO_PERSISTENT_DIO) &&
+	    (dio = iocb->ki_filp->private_data) != NULL) {
+		if (cmpxchg(&iocb->ki_filp->private_data, dio, NULL) != dio)
+			dio = NULL;
+	}
+	if (!dio)
+		dio = kmem_cache_alloc(dio_cache, GFP_KERNEL);
 	retval = -ENOMEM;
 	if (!dio)
 		goto out;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index b4d71b5e1ff2..b821fa32ba3f 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -52,6 +52,7 @@ struct seq_file;
 struct workqueue_struct;
 struct iov_iter;
 struct vm_fault;
+struct dio;
 
 extern void __init inode_init(void);
 extern void __init inode_init_early(void);
@@ -2612,9 +2613,14 @@ enum {
 	/* filesystem can handle aio writes beyond i_size */
 	DIO_ASYNC_EXTEND = 0x04,
+
+	/* file->private_data is used to store a 'struct dio'
+	 * between calls */
+	DIO_PERSISTENT_DIO = 0x08,
 };
 
 void dio_end_io(struct bio *bio, int error);
+void dio_free(struct dio *dio);
 
 ssize_t __blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
 	struct block_device *bdev, struct iov_iter *iter, loff_t offset,