From: Adam Borowski <kilobyte@angband.pl>
To: David Sterba <dsterba@suse.cz>,
linux-btrfs@vger.kernel.org, Mark Fasheh <mfasheh@versity.com>
Cc: Adam Borowski <kilobyte@angband.pl>
Subject: [PATCH 2/2] btrfs-progs: defrag: open files RO on new enough kernels or if root
Date: Mon, 3 Sep 2018 12:14:26 +0200
Message-ID: <20180903101426.14968-2-kilobyte@angband.pl>
In-Reply-To: <20180903101426.14968-1-kilobyte@angband.pl>
Fixes ETXTBSY races: a file that is currently being executed cannot be opened
O_RDWR, so defrag could spuriously fail on running binaries.  Kernels 4.19+
accept the defrag ioctl on a read-only file descriptor; older kernels do so
only for root.
Signed-off-by: Adam Borowski <kilobyte@angband.pl>
---
cmds-filesystem.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 06c8311b..4c9df69f 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
@@ -26,6 +26,7 @@
 #include <ftw.h>
 #include <mntent.h>
 #include <linux/limits.h>
+#include <linux/version.h>
 #include <getopt.h>
 
 #include <btrfsutil.h>
@@ -39,12 +40,14 @@
 #include "list_sort.h"
 #include "disk-io.h"
 #include "help.h"
+#include "fsfeatures.h"
 
 /*
  * for btrfs fi show, we maintain a hash of fsids we've already printed.
  * This way we don't print dups if a given FS is mounted more than once.
  */
 static struct seen_fsid *seen_fsid_hash[SEEN_FSID_HASH_SIZE] = {NULL,};
+static int defrag_ro = O_RDONLY;
 
 static const char * const filesystem_cmd_group_usage[] = {
 	"btrfs filesystem [<group>] <command> [<args>]",
@@ -877,7 +880,7 @@ static int defrag_callback(const char *fpath, const struct stat *sb,
 	if ((typeflag == FTW_F) && S_ISREG(sb->st_mode)) {
 		if (defrag_global_verbose)
 			printf("%s\n", fpath);
-		fd = open(fpath, O_RDWR);
+		fd = open(fpath, defrag_ro);
 		if (fd < 0) {
 			goto error;
 		}
@@ -914,6 +917,9 @@ static int cmd_filesystem_defrag(int argc, char **argv)
 	int compress_type = BTRFS_COMPRESS_NONE;
 	DIR *dirstream;
 
+	if (get_running_kernel_version() < KERNEL_VERSION(4,19,0) && getuid())
+		defrag_ro = O_RDWR;
+
 	/*
 	 * Kernel has a different default (256K) that is supposed to be safe,
 	 * but it does not defragment very well. The 32M will likely lead to
@@ -1014,7 +1020,7 @@ static int cmd_filesystem_defrag(int argc, char **argv)
 		int defrag_err = 0;
 
 		dirstream = NULL;
-		fd = open_file_or_dir(argv[i], &dirstream);
+		fd = open_file_or_dir3(argv[i], &dirstream, defrag_ro);
 		if (fd < 0) {
 			error("cannot open %s: %m", argv[i]);
 			ret = -errno;
--
2.19.0.rc1
Thread overview: 9+ messages
2018-09-03 10:14 [PATCH 1/2] btrfs-progs: fix kernel version parsing on some versions past 3.0 Adam Borowski
2018-09-03 10:14 ` Adam Borowski [this message]
2018-09-03 11:01 ` [PATCH 2/2] btrfs-progs: defrag: open files RO on new enough kernels or if root Nikolay Borisov
2018-09-03 11:12 ` Adam Borowski
2018-09-03 11:31 ` [PATCH v2] btrfs-progs: defrag: open files RO on new enough kernels Adam Borowski
2018-09-03 11:41 ` Nikolay Borisov
2018-09-03 11:46 ` [PATCH v3] " Adam Borowski
2018-09-03 11:04 ` [PATCH 2/2] btrfs-progs: defrag: open files RO on new enough kernels or if root Nikolay Borisov
2018-09-03 11:28 ` Adam Borowski