* [PATCH AUTOSEL 6.17-5.4] ceph: add checking of wait_for_completion_killable() return value
[not found] <20251026144958.26750-1-sashal@kernel.org>
@ 2025-10-26 14:48 ` Sasha Levin
2025-10-26 14:49 ` [PATCH AUTOSEL 6.17-6.12] ceph: fix multifs mds auth caps issue Sasha Levin
` (2 subsequent siblings)
3 siblings, 0 replies; 4+ messages in thread
From: Sasha Levin @ 2025-10-26 14:48 UTC (permalink / raw)
To: patches, stable
Cc: Viacheslav Dubeyko, Alex Markuze, Ilya Dryomov, Sasha Levin,
xiubli, ceph-devel
From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
[ Upstream commit b7ed1e29cfe773d648ca09895b92856bd3a2092d ]
The Coverity Scan service has detected a call to
wait_for_completion_killable() without a check of the return
value in ceph_lock_wait_for_completion() [1]. The CID 1636232
defect contains the explanation: "If the function returns an error
value, the error value may be mistaken for a normal value.
In ceph_lock_wait_for_completion(): Value returned from
a function is not checked for errors before being used. (CWE-252)".
The patch adds a check of the wait_for_completion_killable()
return value and returns the error code from
ceph_lock_wait_for_completion().
[1] https://scan5.scan.coverity.com/#/project-view/64304/10063?selectedIssue=1636232
Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Reviewed-by: Alex Markuze <amarkuze@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
YES – returning the error from the killable wait is required for correct
abort handling in the Ceph MDS client.
- `fs/ceph/locks.c:224-226` now propagates the
`wait_for_completion_killable()` status instead of unconditionally
succeeding. This covers the case where a task waiting for
`req->r_safe_completion` is interrupted by a fatal signal (the helper
returns `-ERESTARTSYS` per `kernel/sched/completion.c`), so
`ceph_lock_wait_for_completion()` no longer hides that failure.
- `ceph_mdsc_wait_request()` relies on the wait callback’s return code
to drive error cleanup (`fs/ceph/mds_client.c:3761-3776`): only when
the callback returns `< 0` does it set `CEPH_MDS_R_ABORTED`, preserve
the error, and call `ceph_invalidate_dir_request()` for write-style
operations. With the old code the callback always returned 0, so a
second signal during the safe-completion wait would skip that abort
path even though `req->r_err` eventually propagates a failure; in
turn, the caller could observe stale directory state and inconsistent
locking semantics.
- The change is tiny, affects only the Ceph lock abort path, and has no
dependencies. It keeps normal success cases untouched (`err == 0`
still returns early) while making the error handling consistent.
- Given it fixes a real user-visible bug (signals during lock abort
losing associated cleanup) with negligible regression risk, it’s a
good candidate for the stable series.
fs/ceph/locks.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index ebf4ac0055ddc..dd764f9c64b9f 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -221,7 +221,10 @@ static int ceph_lock_wait_for_completion(struct ceph_mds_client *mdsc,
if (err && err != -ERESTARTSYS)
return err;
- wait_for_completion_killable(&req->r_safe_completion);
+ err = wait_for_completion_killable(&req->r_safe_completion);
+ if (err)
+ return err;
+
return 0;
}
--
2.51.0
* [PATCH AUTOSEL 6.17-6.12] ceph: fix multifs mds auth caps issue
[not found] <20251026144958.26750-1-sashal@kernel.org>
2025-10-26 14:48 ` [PATCH AUTOSEL 6.17-5.4] ceph: add checking of wait_for_completion_killable() return value Sasha Levin
@ 2025-10-26 14:49 ` Sasha Levin
2025-10-26 14:49 ` [PATCH AUTOSEL 6.17-6.6] ceph: refactor wake_up_bit() pattern of calling Sasha Levin
2025-10-26 14:49 ` [PATCH AUTOSEL 6.17-6.12] ceph: fix potential race condition in ceph_ioctl_lazyio() Sasha Levin
3 siblings, 0 replies; 4+ messages in thread
From: Sasha Levin @ 2025-10-26 14:49 UTC (permalink / raw)
To: patches, stable
Cc: Kotresh HR, Viacheslav Dubeyko, Ilya Dryomov, Sasha Levin, xiubli,
ceph-devel
From: Kotresh HR <khiremat@redhat.com>
[ Upstream commit 22c73d52a6d05c5a2053385c0d6cd9984732799d ]
The mds auth caps check should also validate the
fsname along with the associated caps. Not doing
so would result in applying the mds auth caps of
one fs onto the other fs in a multifs ceph cluster.
The bug causes multiple issues w.r.t. user
authentication; the following is one such example.
Steps to Reproduce (on vstart cluster):
1. Create two file systems in a cluster, say 'fsname1' and 'fsname2'
2. Authorize read only permission to the user 'client.usr' on fs 'fsname1'
$ceph fs authorize fsname1 client.usr / r
3. Authorize read and write permission to the same user 'client.usr' on fs 'fsname2'
$ceph fs authorize fsname2 client.usr / rw
4. Update the keyring
$ceph auth get client.usr >> ./keyring
With the above permissions for the user 'client.usr', the
expectation is as follows.
a. The 'client.usr' should be able to only read the contents
and not allowed to create or delete files on file system 'fsname1'.
b. The 'client.usr' should be able to read/write on file system 'fsname2'.
But, with this bug, the 'client.usr' is allowed to read/write on file
system 'fsname1'. See below.
5. Mount the file system 'fsname1' with the user 'client.usr'
$sudo bin/mount.ceph usr@.fsname1=/ /kmnt_fsname1_usr/
6. Try creating a file on file system 'fsname1' with user 'client.usr'. This
should fail but passes with this bug.
$touch /kmnt_fsname1_usr/file1
7. Mount the file system 'fsname1' with the user 'client.admin' and create a
file.
$sudo bin/mount.ceph admin@.fsname1=/ /kmnt_fsname1_admin
$echo "data" > /kmnt_fsname1_admin/admin_file1
8. Try removing an existing file on file system 'fsname1' with the user
'client.usr'. This shouldn't succeed but does with the bug.
$rm -f /kmnt_fsname1_usr/admin_file1
For more information, please see the corresponding mds/fuse patch and
the tests added under the tracker mentioned below.
v2: Fix a possible null dereference in doutc
v3: Don't store fsname from mdsmap, validate against
ceph_mount_options's fsname and use it
v4: Code refactor, better warning message and
fix possible compiler warning
[ Slava.Dubeyko: "fsname check failed" -> "fsname mismatch" ]
Link: https://tracker.ceph.com/issues/72167
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
YES – this patch plugs a real permissions hole in CephFS multi-FS
deployments with a contained, low-risk change.
- `fs/ceph/mds_client.c:5658-5669` now refuses an auth-cap entry when
its `match.fs_name` differs from the mounted namespace; without this
guard the client can borrow read/write caps from another filesystem
and escalate privileges exactly as shown in the repro.
- `fs/ceph/mdsmap.c:355-371` rejects an incoming MDS map whose `fs_name`
does not match the selected namespace, preventing the client from ever
switching to the wrong filesystem context before the cap check runs.
- `fs/ceph/super.h:107-118` just moves the existing `namespace_equals()`
helper so it can be reused; there’s no behavioral change beyond
sharing the check.
The fix is tightly scoped to the Ceph client, doesn’t introduce new
APIs, and gracefully falls back when older servers omit `fs_name` (the
compare is skipped). The only new failure mode is refusing to mount/use
the wrong filesystem, which is precisely what we want. This is a
security-relevant bug fix that meets the stable tree criteria;
backporting is advisable. Recommended follow-up after backport: run the
multi-filesystem auth-cap scenario from the tracker to confirm the
regression is gone.
fs/ceph/mds_client.c | 8 ++++++++
fs/ceph/mdsmap.c | 14 +++++++++++++-
fs/ceph/super.c | 14 --------------
fs/ceph/super.h | 14 ++++++++++++++
4 files changed, 35 insertions(+), 15 deletions(-)
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 3bc72b47fe4d4..3efbc11596e00 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -5649,11 +5649,19 @@ static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
u32 caller_uid = from_kuid(&init_user_ns, cred->fsuid);
u32 caller_gid = from_kgid(&init_user_ns, cred->fsgid);
struct ceph_client *cl = mdsc->fsc->client;
+ const char *fs_name = mdsc->fsc->mount_options->mds_namespace;
const char *spath = mdsc->fsc->mount_options->server_path;
bool gid_matched = false;
u32 gid, tlen, len;
int i, j;
+ doutc(cl, "fsname check fs_name=%s match.fs_name=%s\n",
+ fs_name, auth->match.fs_name ? auth->match.fs_name : "");
+ if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
+ /* fsname mismatch, try next one */
+ return 0;
+ }
+
doutc(cl, "match.uid %lld\n", auth->match.uid);
if (auth->match.uid != MDS_AUTH_UID_ANY) {
if (auth->match.uid != caller_uid)
diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
index 8109aba66e023..2c7b151a7c95c 100644
--- a/fs/ceph/mdsmap.c
+++ b/fs/ceph/mdsmap.c
@@ -353,10 +353,22 @@ struct ceph_mdsmap *ceph_mdsmap_decode(struct ceph_mds_client *mdsc, void **p,
__decode_and_drop_type(p, end, u8, bad_ext);
}
if (mdsmap_ev >= 8) {
+ u32 fsname_len;
/* enabled */
ceph_decode_8_safe(p, end, m->m_enabled, bad_ext);
/* fs_name */
- ceph_decode_skip_string(p, end, bad_ext);
+ ceph_decode_32_safe(p, end, fsname_len, bad_ext);
+
+ /* validate fsname against mds_namespace */
+ if (!namespace_equals(mdsc->fsc->mount_options, *p,
+ fsname_len)) {
+ pr_warn_client(cl, "fsname %*pE doesn't match mds_namespace %s\n",
+ (int)fsname_len, (char *)*p,
+ mdsc->fsc->mount_options->mds_namespace);
+ goto bad;
+ }
+ /* skip fsname after validation */
+ ceph_decode_skip_n(p, end, fsname_len, bad);
}
/* damaged */
if (mdsmap_ev >= 9) {
diff --git a/fs/ceph/super.c b/fs/ceph/super.c
index c3eb651862c55..ebef5244ae25a 100644
--- a/fs/ceph/super.c
+++ b/fs/ceph/super.c
@@ -246,20 +246,6 @@ static void canonicalize_path(char *path)
path[j] = '\0';
}
-/*
- * Check if the mds namespace in ceph_mount_options matches
- * the passed in namespace string. First time match (when
- * ->mds_namespace is NULL) is treated specially, since
- * ->mds_namespace needs to be initialized by the caller.
- */
-static int namespace_equals(struct ceph_mount_options *fsopt,
- const char *namespace, size_t len)
-{
- return !(fsopt->mds_namespace &&
- (strlen(fsopt->mds_namespace) != len ||
- strncmp(fsopt->mds_namespace, namespace, len)));
-}
-
static int ceph_parse_old_source(const char *dev_name, const char *dev_name_end,
struct fs_context *fc)
{
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index cf176aab0f823..4ac6561285b18 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -104,6 +104,20 @@ struct ceph_mount_options {
struct fscrypt_dummy_policy dummy_enc_policy;
};
+/*
+ * Check if the mds namespace in ceph_mount_options matches
+ * the passed in namespace string. First time match (when
+ * ->mds_namespace is NULL) is treated specially, since
+ * ->mds_namespace needs to be initialized by the caller.
+ */
+static inline int namespace_equals(struct ceph_mount_options *fsopt,
+ const char *namespace, size_t len)
+{
+ return !(fsopt->mds_namespace &&
+ (strlen(fsopt->mds_namespace) != len ||
+ strncmp(fsopt->mds_namespace, namespace, len)));
+}
+
/* mount state */
enum {
CEPH_MOUNT_MOUNTING,
--
2.51.0
* [PATCH AUTOSEL 6.17-6.6] ceph: refactor wake_up_bit() pattern of calling
[not found] <20251026144958.26750-1-sashal@kernel.org>
2025-10-26 14:48 ` [PATCH AUTOSEL 6.17-5.4] ceph: add checking of wait_for_completion_killable() return value Sasha Levin
2025-10-26 14:49 ` [PATCH AUTOSEL 6.17-6.12] ceph: fix multifs mds auth caps issue Sasha Levin
@ 2025-10-26 14:49 ` Sasha Levin
2025-10-26 14:49 ` [PATCH AUTOSEL 6.17-6.12] ceph: fix potential race condition in ceph_ioctl_lazyio() Sasha Levin
3 siblings, 0 replies; 4+ messages in thread
From: Sasha Levin @ 2025-10-26 14:49 UTC (permalink / raw)
To: patches, stable
Cc: Viacheslav Dubeyko, Alex Markuze, Ilya Dryomov, Sasha Levin,
xiubli, ceph-devel
From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
[ Upstream commit 53db6f25ee47cb1265141d31562604e56146919a ]
wake_up_bit() is called in ceph_async_unlink_cb(),
wake_async_create_waiters(), and ceph_finish_async_create().
It makes sense to switch to the clear_and_wake_up_bit() helper,
because it makes the code much cleaner and easier to understand.
The more important rework is the addition of an smp_mb__after_atomic()
memory barrier after the bit modification and before the
wake_up_bit() call. It prevents a potential race condition
where other threads access the modified bit. Luckily,
clear_and_wake_up_bit() already implements the required
pattern:
static inline void clear_and_wake_up_bit(int bit, unsigned long *word)
{
clear_bit_unlock(bit, word);
/* See wake_up_bit() for which memory barrier you need to use. */
smp_mb__after_atomic();
wake_up_bit(word, bit);
}
Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Reviewed-by: Alex Markuze <amarkuze@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
YES – this patch fixes a real wakeup race in the Ceph client and is well
suited for stable backporting.
**Why Backport**
- clear_and_wake_up_bit at fs/ceph/dir.c:1263 and fs/ceph/file.c:582,764
replaces the ad-hoc clear+wake sequence and injects the
smp_mb__after_atomic() that wake_up_bit() demands, preventing stale-
bit wakeups.
- Waiters in ceph_wait_on_async_create (fs/ceph/mds_client.h:657-662)
and ceph_wait_on_conflict_unlink (fs/ceph/mds_client.c:836-885) depend
on the bit being visible before the wake; otherwise wake_bit_function
(kernel/sched/wait_bit.c:24-34) refuses to wake them and those threads
can hang.
- Leveraging the helper also gives the release semantics of
clear_bit_unlock (include/linux/wait_bit.h:550-556), so any state
published before clearing di->flags or ci->i_ceph_flags becomes
observable to the awakened waiters.
**Risk**
- Change is limited to swapping in an existing core helper within Ceph
flag handling, so functional risk is minimal and there are no external
dependencies.
Next steps: 1) If possible, run CephFS async create/unlink regression
tests or exercise the workloads that originally hit the wait-on-bit
stalls.
fs/ceph/dir.c | 3 +--
fs/ceph/file.c | 6 ++----
2 files changed, 3 insertions(+), 6 deletions(-)
diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
index 32973c62c1a23..d18c0eaef9b7e 100644
--- a/fs/ceph/dir.c
+++ b/fs/ceph/dir.c
@@ -1260,8 +1260,7 @@ static void ceph_async_unlink_cb(struct ceph_mds_client *mdsc,
spin_unlock(&fsc->async_unlink_conflict_lock);
spin_lock(&dentry->d_lock);
- di->flags &= ~CEPH_DENTRY_ASYNC_UNLINK;
- wake_up_bit(&di->flags, CEPH_DENTRY_ASYNC_UNLINK_BIT);
+ clear_and_wake_up_bit(CEPH_DENTRY_ASYNC_UNLINK_BIT, &di->flags);
spin_unlock(&dentry->d_lock);
synchronize_rcu();
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 978acd3d4b329..d7b943feb9320 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -579,8 +579,7 @@ static void wake_async_create_waiters(struct inode *inode,
spin_lock(&ci->i_ceph_lock);
if (ci->i_ceph_flags & CEPH_I_ASYNC_CREATE) {
- ci->i_ceph_flags &= ~CEPH_I_ASYNC_CREATE;
- wake_up_bit(&ci->i_ceph_flags, CEPH_ASYNC_CREATE_BIT);
+ clear_and_wake_up_bit(CEPH_ASYNC_CREATE_BIT, &ci->i_ceph_flags);
if (ci->i_ceph_flags & CEPH_I_ASYNC_CHECK_CAPS) {
ci->i_ceph_flags &= ~CEPH_I_ASYNC_CHECK_CAPS;
@@ -762,8 +761,7 @@ static int ceph_finish_async_create(struct inode *dir, struct inode *inode,
}
spin_lock(&dentry->d_lock);
- di->flags &= ~CEPH_DENTRY_ASYNC_CREATE;
- wake_up_bit(&di->flags, CEPH_DENTRY_ASYNC_CREATE_BIT);
+ clear_and_wake_up_bit(CEPH_DENTRY_ASYNC_CREATE_BIT, &di->flags);
spin_unlock(&dentry->d_lock);
return ret;
--
2.51.0
* [PATCH AUTOSEL 6.17-6.12] ceph: fix potential race condition in ceph_ioctl_lazyio()
[not found] <20251026144958.26750-1-sashal@kernel.org>
` (2 preceding siblings ...)
2025-10-26 14:49 ` [PATCH AUTOSEL 6.17-6.6] ceph: refactor wake_up_bit() pattern of calling Sasha Levin
@ 2025-10-26 14:49 ` Sasha Levin
3 siblings, 0 replies; 4+ messages in thread
From: Sasha Levin @ 2025-10-26 14:49 UTC (permalink / raw)
To: patches, stable
Cc: Viacheslav Dubeyko, Alex Markuze, Ilya Dryomov, Sasha Levin,
xiubli, ceph-devel
From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
[ Upstream commit 5824ccba9a39a3ad914fc9b2972a2c1119abaac9 ]
The Coverity Scan service has detected a potential
race condition in ceph_ioctl_lazyio() [1].
The CID 1591046 contains the explanation: "Check of thread-shared
field evades lock acquisition (LOCK_EVASION). Thread1 sets
fmode to a new value. Now the two threads have an inconsistent
view of fmode and updates to fields correlated with fmode
may be lost. The data guarded by this critical section may
be read while in an inconsistent state or modified by multiple
racing threads. In ceph_ioctl_lazyio: Checking the value of
a thread-shared field outside of a locked region to determine
if a locked operation involving that thread shared field
has completed. (CWE-543)".
The patch places the fi->fmode field access under ci->i_ceph_lock
protection. It also introduces the is_file_already_lazy
variable, which is set under the lock and checked later,
outside the critical section.
[1] https://scan5.scan.coverity.com/#/project-view/64304/10063?selectedIssue=1591046
Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Reviewed-by: Alex Markuze <amarkuze@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
YES
- The race arises because the pre-patch code reads `fi->fmode` without
holding `ci->i_ceph_lock`, so two threads can both see the lazy bit
clear, then each increment
`ci->i_nr_by_mode[ffs(CEPH_FILE_MODE_LAZY)]++` before either releases
the lock. That leaves the counter permanently elevated,
desynchronising the per-mode counts that `ceph_put_fmode()` relies on
to drop capability refs (`fs/ceph/ioctl.c` before line 212, contrasted
with `fs/ceph/caps.c:4744-4789`).
- The patch now performs the test and update while the lock is held
(`fs/ceph/ioctl.c:212-220`), eliminating the window where concurrent
callers can both act on stale state; the new `is_file_already_lazy`
flag preserves the existing logging/`ceph_check_caps()` calls after
the lock is released (`fs/ceph/ioctl.c:221-228`) so behaviour remains
unchanged aside from closing the race.
- Keeping `i_nr_by_mode` accurate is important beyond metrics: it feeds
`__ceph_caps_file_wanted()` when deciding what caps to request or drop
(`fs/ceph/caps.c:1006-1061`). With the race, a leaked lazy count
prevents the last close path from seeing the inode as idle, delaying
capability release and defeating the lazyio semantics the ioctl is
supposed to provide.
- The change is tightly scoped (one function, no API or struct changes,
same code paths still call `__ceph_touch_fmode()` and
`ceph_check_caps()`), so regression risk is minimal while the fix
hardens a locking invariant already respected by other fmode
transitions such as `ceph_get_fmode()` (`fs/ceph/caps.c:4727-4754`).
- No newer infrastructure is required—the fields, lock, and helpers
touched here have existed in long-term stable kernels—so this bug fix
is suitable for stable backporting despite the likely need to adjust
the `doutc` helper name on older branches.
fs/ceph/ioctl.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/fs/ceph/ioctl.c b/fs/ceph/ioctl.c
index e861de3c79b9e..15cde055f3da1 100644
--- a/fs/ceph/ioctl.c
+++ b/fs/ceph/ioctl.c
@@ -246,21 +246,28 @@ static long ceph_ioctl_lazyio(struct file *file)
struct ceph_inode_info *ci = ceph_inode(inode);
struct ceph_mds_client *mdsc = ceph_inode_to_fs_client(inode)->mdsc;
struct ceph_client *cl = mdsc->fsc->client;
+ bool is_file_already_lazy = false;
+ spin_lock(&ci->i_ceph_lock);
if ((fi->fmode & CEPH_FILE_MODE_LAZY) == 0) {
- spin_lock(&ci->i_ceph_lock);
fi->fmode |= CEPH_FILE_MODE_LAZY;
ci->i_nr_by_mode[ffs(CEPH_FILE_MODE_LAZY)]++;
__ceph_touch_fmode(ci, mdsc, fi->fmode);
- spin_unlock(&ci->i_ceph_lock);
+ } else {
+ is_file_already_lazy = true;
+ }
+ spin_unlock(&ci->i_ceph_lock);
+
+ if (is_file_already_lazy) {
+ doutc(cl, "file %p %p %llx.%llx already lazy\n", file, inode,
+ ceph_vinop(inode));
+ } else {
doutc(cl, "file %p %p %llx.%llx marked lazy\n", file, inode,
ceph_vinop(inode));
ceph_check_caps(ci, 0);
- } else {
- doutc(cl, "file %p %p %llx.%llx already lazy\n", file, inode,
- ceph_vinop(inode));
}
+
return 0;
}
--
2.51.0