* [PATCH RFC 0/1] futex: Add a new robust list syscall
@ 2024-10-24 14:57 André Almeida
  2024-10-24 14:57 ` [PATCH RFC 1/1] futex: Create set_robust_list2 André Almeida
  0 siblings, 1 reply; 7+ messages in thread
From: André Almeida @ 2024-10-24 14:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, Arnd Bergmann, sonicadvance1
  Cc: linux-kernel, kernel-dev, linux-api, André Almeida

This patch adds a new robust list syscall, set_robust_list2(). The
current syscall can't be extended to cover the following use case, so a
new one is needed. The new syscall allows users to set multiple robust
lists per process and to have either 32bit or 64bit pointers in the list.

* Use case

FEX-Emu[1] is an application that runs x86 and x86-64 binaries on an
AArch64 Linux host. One of the tasks of FEX-Emu is to translate syscalls
from one platform to the other. The existing set_robust_list() can't be
easily translated because of two limitations:

1) x86 apps can have robust lists with 32bit pointers. For an x86-64
   kernel this is not a problem because of the compat entry point, but
   there's no such compat entry point for AArch64, so the kernel would do
   the pointer arithmetic wrongly. It is also unviable for userspace to
   track every addition/removal to the robust list and keep a 64bit copy
   of it somewhere else to feed to the kernel. Thus, the new interface
   has an option to tell the kernel whether the list is filled with 32bit
   or 64bit pointers (see the layout sketch after this list).

2) Apps can set just one robust list (in theory, x86-64 apps can set two
   if they also use the compat entry point). That means that when an x86
   app asks FEX-Emu to call set_robust_list(), FEX has two options:
   overwrite its own robust list pointer and make the app robust, or
   ignore the app's robust list and keep the emulator robust. The new
   interface allows multiple robust lists per application, solving this.
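
For reference, the list head being registered is defined in terms of
native pointers and a native long (include/uapi/linux/futex.h, __user
annotations dropped here), so its layout depends entirely on the ABI of
the code that built it:

	struct robust_list {
		struct robust_list *next;
	};

	struct robust_list_head {
		struct robust_list list;
		long futex_offset;
		struct robust_list *list_op_pending;
	};

	/*
	 * On an ILP32 ABI (x86 guest) every field is 4 bytes, 12 bytes in
	 * total; on LP64 (AArch64 host) every field is 8 bytes, 24 bytes in
	 * total. A 64-bit kernel walking a 32-bit list with native loads
	 * therefore reads the wrong offsets, which is what the compat entry
	 * point normally corrects for.
	 */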

* Interface

This is the proposed interface:

	long set_robust_list2(void *head, int index, unsigned int flags)

`head` is the address of the userspace struct robust_list_head, just as
in the old set_robust_list(). It needs to be a void pointer since it can
point to either a normal robust_list_head or a compat_robust_list_head.

`flags` is currently used to define the list type:

	enum robust_list2_type {
		ROBUST_LIST_32BIT,
		ROBUST_LIST_64BIT,
	};

`index` is the index into the kernel's internal linked list of robust
lists (the naming starts to get confusing, I reckon). If `index == -1`,
the user wants to set a new robust_list: the kernel will append it to the
end of the list, assign a new index and return this index to the user. If
`index >= 0`, the user wants to re-set the `*head` of an already existing
list (similar to what happens when set_robust_list() is called twice with
different `*head` values).

If `index` is out of range, or it points to a non-existing robust_list, or if
the internal list is full, an error is returned.
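
To make the calling convention concrete, here is a minimal userspace
sketch against this RFC. The syscall number and flag values below are
taken from the patch and are not in any released kernel headers, so on a
kernel without this patch the call will fail (or hit whatever syscall
ends up owning that number):

	#include <linux/futex.h>     /* struct robust_list_head */
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Proposed in this RFC only, not allocated upstream */
	#ifndef __NR_set_robust_list2
	#define __NR_set_robust_list2 463
	#endif
	#define ROBUST_LIST_32BIT 0
	#define ROBUST_LIST_64BIT 1

	static long set_robust_list2(void *head, int index, unsigned int flags)
	{
		return syscall(__NR_set_robust_list2, head, index, flags);
	}

	int main(void)
	{
		/* An empty robust list: the head points back to itself */
		static struct robust_list_head head = {
			.list = { .next = &head.list },
			.futex_offset = 0,
			.list_op_pending = NULL,
		};
		long idx;

		/* index == -1: ask the kernel for a new slot, get its index back */
		idx = set_robust_list2(&head, -1, ROBUST_LIST_64BIT);
		if (idx < 0) {
			perror("set_robust_list2");
			return 1;
		}
		printf("robust list registered at index %ld\n", idx);

		/* index >= 0: re-set the head of an already existing slot */
		if (set_robust_list2(&head, (int)idx, ROBUST_LIST_64BIT) < 0)
			perror("set_robust_list2 (re-set)");

		return 0;
	}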

* Implementation

The implementation re-uses as much of the existing robust list
infrastructure as possible. The new task_struct member `struct list_head
robust_list2` is just a linked list to which new lists are appended as
the user requests them; in futex_cleanup(), the kernel walks this
internal list, feeding each robust_list to exit_robust_list().

This implementation supports up to 10 lists (defined by
ROBUST_LISTS_PER_TASK), but that is an arbitrary number for this RFC. For
the use case described above, 4 should be enough; I'm not sure what the
limit should be.

It doesn't support list removal (should it?). It also doesn't have a
proper get_robust_list2() yet, but I can add one in the next revision. We
could also have a generic robust_list() syscall that handles both set and
get, controlled by flags.

The new interface has an `unsigned int flags` argument, making it
extensible for future use cases as well.

* Testing

I will provide a selftest similar to the one I proposed for the current
interface here:
https://lore.kernel.org/lkml/20241010011142.905297-1-andrealmeid@igalia.com/

Also, FEX-Emu added support for this interface to validate it:
https://github.com/FEX-Emu/FEX/pull/3966

Feedback is very welcome!

Thanks,
	André

[1] https://github.com/FEX-Emu/FEX

André Almeida (1):
  futex: Create set_robust_list2

 arch/arm/tools/syscall.tbl             |  1 +
 arch/x86/entry/syscalls/syscall_64.tbl |  1 +
 include/linux/futex.h                  |  1 +
 include/linux/sched.h                  |  1 +
 include/uapi/asm-generic/unistd.h      |  5 +-
 include/uapi/linux/futex.h             | 24 +++++++++
 init/init_task.c                       |  3 ++
 kernel/futex/core.c                    | 34 +++++++++---
 kernel/futex/syscalls.c                | 71 ++++++++++++++++++++++++++
 scripts/syscall.tbl                    |  1 +
 10 files changed, 133 insertions(+), 9 deletions(-)

--
2.47.0



* [PATCH RFC 1/1] futex: Create set_robust_list2
  2024-10-24 14:57 [PATCH RFC 0/1] futex: Add a new robust list syscall André Almeida
@ 2024-10-24 14:57 ` André Almeida
  2024-10-24 15:57   ` Arnd Bergmann
  0 siblings, 1 reply; 7+ messages in thread
From: André Almeida @ 2024-10-24 14:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, Arnd Bergmann, sonicadvance1
  Cc: linux-kernel, kernel-dev, linux-api, André Almeida

This new syscall allows setting multiple robust lists for the same
process. There are two list types: 32bit and 64bit lists.

It supports up to 10 lists per process (see ROBUST_LISTS_PER_TASK). The
lists are dynamically allocated on demand, as part of a linked list.
This is the proposed interface:

	long set_robust_list2(void *head, int index, unsigned int flags)

Userspace can ask to set the head of a new list by using index = -1. The
kernel will allocate a new list entry, place it in the linked list and
return the new index to userspace.

Userspace can modify an existing head by using an index >= 0. If the
requested list doesn't exist, an error is returned.

Userspace cannot remove a robust list.

For now, the flags argument selects the list type:

	enum robust_list2_type {
		ROBUST_LIST_32BIT,
		ROBUST_LIST_64BIT,
	};

Signed-off-by: André Almeida <andrealmeid@igalia.com>
---
 arch/arm/tools/syscall.tbl             |  1 +
 arch/x86/entry/syscalls/syscall_64.tbl |  1 +
 include/linux/futex.h                  |  1 +
 include/linux/sched.h                  |  1 +
 include/uapi/asm-generic/unistd.h      |  5 +-
 include/uapi/linux/futex.h             | 24 +++++++++
 init/init_task.c                       |  3 ++
 kernel/futex/core.c                    | 34 +++++++++---
 kernel/futex/syscalls.c                | 72 ++++++++++++++++++++++++++
 scripts/syscall.tbl                    |  1 +
 10 files changed, 134 insertions(+), 9 deletions(-)

diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
index 23c98203c40f..31070d427ea2 100644
--- a/arch/arm/tools/syscall.tbl
+++ b/arch/arm/tools/syscall.tbl
@@ -477,3 +477,4 @@
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
 462	common	mseal				sys_mseal
+463	common	set_robust_list2		sys_set_robust_list2
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 7093ee21c0d1..fbc0cef1a97c 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -386,6 +386,7 @@
 460	common	lsm_set_self_attr	sys_lsm_set_self_attr
 461	common	lsm_list_modules	sys_lsm_list_modules
 462 	common  mseal			sys_mseal
+463	common	set_robust_list2	sys_set_robust_list2
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/futex.h b/include/linux/futex.h
index b70df27d7e85..9c5ab84f86a9 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -65,6 +65,7 @@ static inline void futex_init_task(struct task_struct *tsk)
 #ifdef CONFIG_COMPAT
 	tsk->compat_robust_list = NULL;
 #endif
+	INIT_LIST_HEAD(&tsk->robust_list2);
 	INIT_LIST_HEAD(&tsk->pi_state_list);
 	tsk->pi_state_cache = NULL;
 	tsk->futex_state = FUTEX_STATE_OK;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 449dd64ed9ac..5f72fe66add6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1284,6 +1284,7 @@ struct task_struct {
 #ifdef CONFIG_COMPAT
 	struct compat_robust_list_head __user *compat_robust_list;
 #endif
+	struct list_head		robust_list2;
 	struct list_head		pi_state_list;
 	struct futex_pi_state		*pi_state_cache;
 	struct mutex			futex_exit_mutex;
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 5bf6148cac2b..c1f5c9635c07 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -841,8 +841,11 @@ __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules)
 #define __NR_mseal 462
 __SYSCALL(__NR_mseal, sys_mseal)
 
+#define __NR_set_robust_list2 463
+__SYSCALL(__NR_set_robust_list2, sys_set_robust_list2)
+
 #undef __NR_syscalls
-#define __NR_syscalls 463
+#define __NR_syscalls 464
 
 /*
  * 32 bit systems traditionally used different
diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h
index d2ee625ea189..13903a278b71 100644
--- a/include/uapi/linux/futex.h
+++ b/include/uapi/linux/futex.h
@@ -146,6 +146,30 @@ struct robust_list_head {
 	struct robust_list __user *list_op_pending;
 };
 
+#define ROBUST_LISTS_PER_TASK 10
+
+enum robust_list2_type {
+	ROBUST_LIST_32BIT,
+	ROBUST_LIST_64BIT,
+};
+
+#define ROBUST_LIST_TYPE_MASK (ROBUST_LIST_32BIT | ROBUST_LIST_64BIT)
+
+/*
+ * This is an entry of a linked list of robust lists.
+ *
+ * @head: can point to a 64bit list or a 32bit list
+ * @list_type: determine the size of the futex pointers in the list
+ * @index: the index of this entry in the list
+ * @list: linked list element
+ */
+struct robust_list2_entry {
+	void __user *head;
+	enum robust_list2_type list_type;
+	unsigned int index;
+	struct list_head list;
+};
+
 /*
  * Are there any waiters for this robust futex:
  */
diff --git a/init/init_task.c b/init/init_task.c
index 136a8231355a..1b08e745c47d 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -219,6 +219,9 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 #ifdef CONFIG_SECCOMP_FILTER
 	.seccomp	= { .filter_count = ATOMIC_INIT(0) },
 #endif
+#ifdef CONFIG_FUTEX
+	.robust_list2 = LIST_HEAD_INIT(init_task.robust_list2),
+#endif
 };
 EXPORT_SYMBOL(init_task);
 
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 136768ae2637..d71b4b9630f7 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -796,9 +796,8 @@ static inline int fetch_robust_entry(struct robust_list __user **entry,
  *
  * We silently return on any sign of list-walking problem.
  */
-static void exit_robust_list(struct task_struct *curr)
+static void exit_robust_list(struct task_struct *curr, struct robust_list_head __user *head)
 {
-	struct robust_list_head __user *head = curr->robust_list;
 	struct robust_list __user *entry, *next_entry, *pending;
 	unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;
 	unsigned int next_pi;
@@ -858,7 +857,6 @@ static void exit_robust_list(struct task_struct *curr)
 	}
 }
 
-#ifdef CONFIG_COMPAT
 static void __user *futex_uaddr(struct robust_list __user *entry,
 				compat_long_t futex_offset)
 {
@@ -890,9 +888,9 @@ compat_fetch_robust_entry(compat_uptr_t *uentry, struct robust_list __user **ent
  *
  * We silently return on any sign of list-walking problem.
  */
-static void compat_exit_robust_list(struct task_struct *curr)
+static void compat_exit_robust_list(struct task_struct *curr,
+				    struct compat_robust_list_head __user *head)
 {
-	struct compat_robust_list_head __user *head = curr->compat_robust_list;
 	struct robust_list __user *entry, *next_entry, *pending;
 	unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;
 	unsigned int next_pi;
@@ -957,7 +955,6 @@ static void compat_exit_robust_list(struct task_struct *curr)
 		handle_futex_death(uaddr, curr, pip, HANDLE_DEATH_PENDING);
 	}
 }
-#endif
 
 #ifdef CONFIG_FUTEX_PI
 
@@ -1040,14 +1037,35 @@ static inline void exit_pi_state_list(struct task_struct *curr) { }
 
 static void futex_cleanup(struct task_struct *tsk)
 {
+	struct robust_list2_entry *curr, *n;
+	struct list_head *list2 = &tsk->robust_list2;
+
 	if (unlikely(tsk->robust_list)) {
-		exit_robust_list(tsk);
+		exit_robust_list(tsk, tsk->robust_list);
 		tsk->robust_list = NULL;
 	}
 
+	/*
+	 * Walk through the linked list, parsing robust lists and freeing the
+	 * allocated lists
+	 */
+	if (unlikely(!list_empty(list2))) {
+		list_for_each_entry_safe(curr, n, list2, list) {
+			if (curr->head != NULL) {
+				if (curr->list_type == ROBUST_LIST_64BIT)
+					exit_robust_list(tsk, curr->head);
+				else if (curr->list_type == ROBUST_LIST_32BIT)
+					compat_exit_robust_list(tsk, curr->head);
+				curr->head = NULL;
+			}
+			list_del_init(&curr->list);
+			kfree(curr);
+		}
+	}
+
 #ifdef CONFIG_COMPAT
 	if (unlikely(tsk->compat_robust_list)) {
-		compat_exit_robust_list(tsk);
+		compat_exit_robust_list(tsk, tsk->compat_robust_list);
 		tsk->compat_robust_list = NULL;
 	}
 #endif
diff --git a/kernel/futex/syscalls.c b/kernel/futex/syscalls.c
index 4b6da9116aa6..2853de204f9b 100644
--- a/kernel/futex/syscalls.c
+++ b/kernel/futex/syscalls.c
@@ -39,6 +39,78 @@ SYSCALL_DEFINE2(set_robust_list, struct robust_list_head __user *, head,
 	return 0;
 }
 
+#define ROBUST_LIST_FLAGS ROBUST_LIST_TYPE_MASK
+
+/*
+ * sys_set_robust_list2()
+ *
+ * When index == -1, create a new list for user. When index >= 0, try to find
+ * the corresponding list and re-set the head there.
+ *
+ * Return values:
+ *  >= 0: success, index of the robust list
+ *  -EINVAL: invalid flags, invalid index
+ *  -ENOENT: requested index nowhere to be found
+ *  -ENOMEM: error allocating new list
+ *  -ESRCH: too many allocated lists
+ */
+SYSCALL_DEFINE3(set_robust_list2, struct robust_list_head __user *, head,
+		int, index, unsigned int, flags)
+{
+	struct list_head *list2 = &current->robust_list2;
+	struct robust_list2_entry *prev, *new = NULL;
+	unsigned int type;
+
+	type = flags & ROBUST_LIST_TYPE_MASK;
+
+	if (index < -1 || index >= ROBUST_LISTS_PER_TASK)
+		return -EINVAL;
+
+	if ((flags & ~ROBUST_LIST_FLAGS) != 0)
+		return -EINVAL;
+
+	if (index == -1) {
+		if (list_empty(list2)) {
+			index = 0;
+		} else {
+			prev = list_last_entry(list2, struct robust_list2_entry, list);
+			index = prev->index + 1;
+		}
+
+		if (index >= ROBUST_LISTS_PER_TASK)
+			return -EINVAL;
+
+		new = kmalloc(sizeof(struct robust_list2_entry), GFP_KERNEL);
+		if (!new)
+			return -ENOMEM;
+
+		list_add_tail(&new->list, list2);
+		new->index = index;
+
+	} else if (index >= 0) {
+		struct robust_list2_entry *curr;
+
+		if (list_empty(list2))
+			return -ENOENT;
+
+		list_for_each_entry(curr, list2, list) {
+			if (index == curr->index) {
+				new = curr;
+				break;
+			}
+		}
+
+		if (!new)
+			return -ENOENT;
+	}
+
+	BUG_ON(!new);
+	new->head = head;
+	new->list_type = type;
+
+	return index;
+}
+
 /**
  * sys_get_robust_list() - Get the robust-futex list head of a task
  * @pid:	pid of the process [zero for current task]
diff --git a/scripts/syscall.tbl b/scripts/syscall.tbl
index 845e24eb372e..e174f6e2d521 100644
--- a/scripts/syscall.tbl
+++ b/scripts/syscall.tbl
@@ -403,3 +403,4 @@
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
 462	common	mseal				sys_mseal
+463	common	set_robust_list2		sys_set_robust_list2
-- 
2.47.0



* Re: [PATCH RFC 1/1] futex: Create set_robust_list2
  2024-10-24 14:57 ` [PATCH RFC 1/1] futex: Create set_robust_list2 André Almeida
@ 2024-10-24 15:57   ` Arnd Bergmann
  2024-10-24 18:03     ` André Almeida
  0 siblings, 1 reply; 7+ messages in thread
From: Arnd Bergmann @ 2024-10-24 15:57 UTC (permalink / raw)
  To: André Almeida, Thomas Gleixner, Ingo Molnar, Peter Zijlstra,
	Darren Hart, Davidlohr Bueso, sonicadvance1
  Cc: linux-kernel, kernel-dev, linux-api

On Thu, Oct 24, 2024, at 14:57, André Almeida wrote:
> This new syscall allows setting multiple robust lists for the same
> process. There are two list types: 32bit and 64bit lists.
>
> It supports up to 10 lists per process (see ROBUST_LISTS_PER_TASK). The
> lists are dynamically allocated on demand, as part of a linked list.
> This is the proposed interface:
>
> 	long set_robust_list2(void *head, int index, unsigned int flags)
>
> Userspace can ask to set the head of a new list by using index = -1.
> The kernel will allocate a new list entry, place it in the linked list
> and return the new index to userspace.
>
> Userspace can modify an existing head by using an index >= 0. If the
> requested list doesn't exist, an error is returned.
>
> Userspace cannot remove a robust list.
>
> For now, the flags argument selects the list type:
>
> 	enum robust_list2_type {
> 	 	ROBUST_LIST_32BIT,
> 		ROBUST_LIST_64BIT,
> 	 };
>
> Signed-off-by: André Almeida <andrealmeid@igalia.com>

Hi André,

I have no opinion on the syscall itself, but I'll comment on
the way you hook it up:

>  arch/arm/tools/syscall.tbl             |  1 +
>  arch/x86/entry/syscalls/syscall_64.tbl |  1 +

If we agree on the number, this should be added to all
architectures at the same time. In particular, when
you add it to 32-bit arm, it also needs to be in the
corresponding arch/arm64/tools/syscall_32.tbl for
compat mode.

>  include/uapi/asm-generic/unistd.h      |  5 +-

This reminds me that I need to send the patch to remove this
file, nothing should use it any more, though we still have
the copy in tools/include/uapi/asm-generic/unistd.h that
still gets referenced until the scripts are changed to
use the syscall.tbl format.

> +	if (unlikely(!list_empty(list2))) {
> +		list_for_each_entry_safe(curr, n, list2, list) {
> +			if (curr->head != NULL) {
> +				if (curr->list_type == ROBUST_LIST_64BIT)
> +					exit_robust_list(tsk, curr->head);
> +				else if (curr->list_type == ROBUST_LIST_32BIT)
> +					compat_exit_robust_list(tsk, curr->head);
> +				curr->head = NULL;
> +			}

This looks like the behavior of a 32-bit task using
ROBUST_LIST_64BIT is different on native 32-bit kernels
compared to running on compat mode.

Assuming we want them to behave the same way, did you intend
ROBUST_LIST_64BIT to refer to 64-bit pointers on 32-bit
tasks, or should they use normal word-size pointers?

     Arnd


* Re: [PATCH RFC 1/1] futex: Create set_robust_list2
  2024-10-24 15:57   ` Arnd Bergmann
@ 2024-10-24 18:03     ` André Almeida
  2024-10-24 18:08       ` Randy Dunlap
  2024-10-24 19:55       ` André Almeida
  0 siblings, 2 replies; 7+ messages in thread
From: André Almeida @ 2024-10-24 18:03 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Thomas Gleixner, Ingo Molnar, Davidlohr Bueso, Darren Hart,
	Peter Zijlstra, sonicadvance1, linux-kernel, kernel-dev,
	linux-api

Hi Arnd,

On 24/10/2024 12:57, Arnd Bergmann wrote:
> On Thu, Oct 24, 2024, at 14:57, André Almeida wrote:
>> This new syscall allows setting multiple robust lists for the same
>> process. There are two list types: 32bit and 64bit lists.
>>
>> It supports up to 10 lists per process (see ROBUST_LISTS_PER_TASK). The
>> lists are dynamically allocated on demand, as part of a linked list.
>> This is the proposed interface:
>>
>> 	long set_robust_list2(void *head, int index, unsigned int flags)
>>
>> Userspace can ask to set the head of a new list by using index = -1.
>> The kernel will allocate a new list entry, place it in the linked list
>> and return the new index to userspace.
>>
>> Userspace can modify an existing head by using an index >= 0. If the
>> requested list doesn't exist, an error is returned.
>>
>> Userspace cannot remove a robust list.
>>
>> For now, the flags argument selects the list type:
>>
>> 	enum robust_list2_type {
>> 	 	ROBUST_LIST_32BIT,
>> 		ROBUST_LIST_64BIT,
>> 	 };
>>
>> Signed-off-by: André Almeida <andrealmeid@igalia.com>
> 
> Hi André,
> 
> I have no opinion on the syscall itself, but I'll comment on
> the way you hook it up:
> 
>>   arch/arm/tools/syscall.tbl             |  1 +
>>   arch/x86/entry/syscalls/syscall_64.tbl |  1 +
> 
> If we agree on the number, this should be added to all
> architectures at the same time. In particular, when
> you add it to 32-bit arm, it also needs to be in the
> corresponding arch/arm64/tools/syscall_32.tbl for
> compat mode.

Ok

> 
>>   include/uapi/asm-generic/unistd.h      |  5 +-
> 
> This reminds me that I need to send the patch to remove this
> file, nothing should use it any more, though we still have
> the copy in tools/include/uapi/asm-generic/unistd.h that
> still gets referenced until the scripts are changed to
> use the syscall.tbl format.
> 
>> +	if (unlikely(!list_empty(list2))) {
>> +		list_for_each_entry_safe(curr, n, list2, list) {
>> +			if (curr->head != NULL) {
>> +				if (curr->list_type == ROBUST_LIST_64BIT)
>> +					exit_robust_list(tsk, curr->head);
>> +				else if (curr->list_type == ROBUST_LIST_32BIT)
>> +					compat_exit_robust_list(tsk, curr->head);
>> +				curr->head = NULL;
>> +			}
> 
> This looks like the behavior of a 32-bit task using
> ROBUST_LIST_64BIT is different on native 32-bit kernels
> compared to running on compat mode.
> 
> Assuming we want them to behave the same way, did you intend
> ROBUST_LIST_64BIT to refer to 64-bit pointers on 32-bit
> tasks, or should they use normal word-size pointers?

Oh right, I haven't covered that indeed. I think I would need to have 
something like:

static void exit_robust_list_64()
static void exit_robust_list_32()

And then each function would use explicit sizes for pointers. Also, I
would rewrite the conditions so that every combination of 64/32bit
kernel/app calls the appropriate function.
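
Roughly like this, as a sketch (hypothetical type names, just to
illustrate the explicit widths; this is not part of the patch):

	#include <linux/types.h>

	struct task_struct;

	/* A robust list built with 64-bit pointers and a 64-bit offset */
	struct robust_list_head64 {
		__u64 next;
		__s64 futex_offset;
		__u64 list_op_pending;
	};

	/* The same list as a 32-bit ABI lays it out */
	struct robust_list_head32 {
		__u32 next;
		__s32 futex_offset;
		__u32 list_op_pending;
	};

	/*
	 * Each walker would then fetch entries with its matching width,
	 * independent of whether the kernel itself is 32-bit or 64-bit:
	 */
	static void exit_robust_list_64bit(struct task_struct *curr,
					   struct robust_list_head64 __user *head);
	static void exit_robust_list_32bit(struct task_struct *curr,
					   struct robust_list_head32 __user *head);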

Alternatively, we could just disallow 32bit kernels/apps from using the
ROBUST_LIST_64BIT option.

Thank you for your feedback!
	André


* Re: [PATCH RFC 1/1] futex: Create set_robust_list2
  2024-10-24 18:03     ` André Almeida
@ 2024-10-24 18:08       ` Randy Dunlap
  2024-10-24 19:55       ` André Almeida
  1 sibling, 0 replies; 7+ messages in thread
From: Randy Dunlap @ 2024-10-24 18:08 UTC (permalink / raw)
  To: André Almeida, Arnd Bergmann
  Cc: Thomas Gleixner, Ingo Molnar, Davidlohr Bueso, Darren Hart,
	Peter Zijlstra, sonicadvance1, linux-kernel, kernel-dev,
	linux-api

Hi,

On 10/24/24 11:03 AM, André Almeida wrote:
> Hi Arnd,
> 

Please consider putting more of the cover letter explanation into the commit message.
Thanks.

-- 
~Randy



* Re: [PATCH RFC 1/1] futex: Create set_robust_list2
  2024-10-24 18:03     ` André Almeida
  2024-10-24 18:08       ` Randy Dunlap
@ 2024-10-24 19:55       ` André Almeida
  2024-10-25  6:33         ` Arnd Bergmann
  1 sibling, 1 reply; 7+ messages in thread
From: André Almeida @ 2024-10-24 19:55 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Thomas Gleixner, Ingo Molnar, Davidlohr Bueso, Darren Hart,
	Peter Zijlstra, sonicadvance1, linux-kernel, kernel-dev,
	linux-api

On 24/10/2024 15:03, André Almeida wrote:
> Hi Arnd,
> 
> On 24/10/2024 12:57, Arnd Bergmann wrote:
>> On Thu, Oct 24, 2024, at 14:57, André Almeida wrote:
>>> This new syscall allows setting multiple robust lists for the same
>>> process. There are two list types: 32bit and 64bit lists.
>>>

[...]

>>> +    if (unlikely(!list_empty(list2))) {
>>> +        list_for_each_entry_safe(curr, n, list2, list) {
>>> +            if (curr->head != NULL) {
>>> +                if (curr->list_type == ROBUST_LIST_64BIT)
>>> +                    exit_robust_list(tsk, curr->head);
>>> +                else if (curr->list_type == ROBUST_LIST_32BIT)
>>> +                    compat_exit_robust_list(tsk, curr->head);
>>> +                curr->head = NULL;
>>> +            }
>>
>> This looks like the behavior of a 32-bit task using
>> ROBUST_LIST_64BIT is different on native 32-bit kernels
>> compared to running on compat mode.
>>
>> Assuming we want them to behave the same way, did you intend
>> ROBUST_LIST_64BIT to refer to 64-bit pointers on 32-bit
>> tasks, or should they use normal word-size pointers?
> 
> Oh right, I haven't covered that indeed. I think I would need to have 
> something like:
> 
> static void exit_robust_list_64()
> static void exit_robust_list_32()
> 
> And then each function would use explicit sizes for pointers. Also, I
> would rewrite the conditions so that every combination of 64/32bit
> kernel/app calls the appropriate function.

Something like this:

#ifdef CONFIG_64BIT
	if (unlikely(tsk->robust_list)) {
		exit_robust_list_64bit(tsk, tsk->robust_list);
		tsk->robust_list = NULL;
	}
#else
	if (unlikely(tsk->robust_list)) {
		exit_robust_list_32bit(tsk, tsk->robust_list);
		tsk->robust_list = NULL;
	}
#endif

#ifdef CONFIG_COMPAT
	if (unlikely(tsk->compat_robust_list)) {
		exit_robust_list_32bit(tsk, tsk->compat_robust_list);
		tsk->compat_robust_list = NULL;
	}
#endif

	/* Simplified */
	list_for_each_entry_safe(curr, n, list2, list) {
		if (curr->list_type == ROBUST_LIST_64BIT)
			exit_robust_list_64bit(tsk, curr->head);
		else if (curr->list_type == ROBUST_LIST_32BIT)
			exit_robust_list_32bit(tsk, curr->head);
	}



* Re: [PATCH RFC 1/1] futex: Create set_robust_list2
  2024-10-24 19:55       ` André Almeida
@ 2024-10-25  6:33         ` Arnd Bergmann
  0 siblings, 0 replies; 7+ messages in thread
From: Arnd Bergmann @ 2024-10-25  6:33 UTC (permalink / raw)
  To: André Almeida
  Cc: Thomas Gleixner, Ingo Molnar, Davidlohr Bueso, Darren Hart,
	Peter Zijlstra, sonicadvance1, linux-kernel, kernel-dev,
	linux-api

On Thu, Oct 24, 2024, at 19:55, André Almeida wrote:
> On 24/10/2024 15:03, André Almeida wrote:
>> On 24/10/2024 12:57, Arnd Bergmann wrote:
>>> On Thu, Oct 24, 2024, at 14:57, André Almeida wrote:
>>>> This new syscall allows setting multiple robust lists for the same
>>>> process. There are two list types: 32bit and 64bit lists.
>>> Assuming we want them to behave the same way, did you intend
>>> ROBUST_LIST_64BIT to refer to 64-bit pointers on 32-bit
>>> tasks, or should they use normal word-size pointers?
>> 
>> Oh right, I haven't covered that indeed. I think I would need to have 
>> something like:
>> 
>> static void exit_robust_list_64()
>> static void exit_robust_list_32()
>> 
>> And then each function would use explicit sizes for pointers. Also, I
>> would rewrite the conditions so that every combination of 64/32bit
>> kernel/app calls the appropriate function.
>
> Something like this:
>
> #ifdef CONFIG_64BIT
> 	if (unlikely(tsk->robust_list)) {
> 		exit_robust_list_64bit(tsk, tsk->robust_list);
> 		tsk->robust_list = NULL;
> 	}
> #else
> 	if (unlikely(tsk->robust_list)) {
> 		exit_robust_list_32bit(tsk, tsk->robust_list);
> 		tsk->robust_list = NULL;
> 	}
> #endif
>
> #ifdef CONFIG_COMPAT
> 	if (unlikely(tsk->compat_robust_list)) {
> 		exit_robust_list_32bit(tsk, tsk->compat_robust_list);
> 		tsk->compat_robust_list = NULL;
> 	}
> #endif

This should work, but I wonder if it ends up simpler to
instead have tsk->robust_list64 and tsk->robust_list32
entries and just put the #ifdef CONFIG_64BIT around the code
accessing robust_list64.

     Arnd

