linux-fsdevel.vger.kernel.org archive mirror
* [RFC PATCH 0/3] lib/list_batch: A simple list insertion/deletion batching facility
@ 2016-01-26 16:03 Waiman Long
  2016-01-26 16:03 ` [RFC PATCH 1/3] " Waiman Long
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Waiman Long @ 2016-01-26 16:03 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro
  Cc: linux-fsdevel, x86, linux-kernel, Peter Zijlstra, Scott J Norton,
	Douglas Hatch, Waiman Long

This patchset introduces a simple list insertion/deletion batching
facility that combines multiple list insertion and deletion operations
into a single lock/unlock critical section.

Patch 1 introduces this new facility.

Patch 2 enables it for the x86-64 architecture.

Patch 3 converts insertion into and deletion from the VFS superblock's
inode list to the new list batching functions.

Waiman Long (3):
  lib/list_batch: A simple list insertion/deletion batching facility
  lib/list_batch, x86: Enable list insertion/deletion batching in
    x86-64
  vfs: Enable list batching for the superblock's inode list

 arch/x86/Kconfig           |    1 +
 fs/inode.c                 |   13 ++---
 fs/super.c                 |    1 +
 include/linux/fs.h         |    2 +
 include/linux/list_batch.h |  120 ++++++++++++++++++++++++++++++++++++++++++++
 lib/Kconfig                |    7 +++
 lib/Makefile               |    1 +
 lib/list_batch.c           |  117 ++++++++++++++++++++++++++++++++++++++++++
 8 files changed, 254 insertions(+), 8 deletions(-)
 create mode 100644 include/linux/list_batch.h
 create mode 100644 lib/list_batch.c



* [RFC PATCH 1/3] lib/list_batch: A simple list insertion/deletion batching facility
  2016-01-26 16:03 [RFC PATCH 0/3] lib/list_batch: A simple list insertion/deletion batching facility Waiman Long
@ 2016-01-26 16:03 ` Waiman Long
  2016-01-27 16:34   ` Peter Zijlstra
  2016-01-26 16:03 ` [RFC PATCH 2/3] lib/list_batch, x86: Enable list insertion/deletion batching in x86-64 Waiman Long
  2016-01-26 16:03 ` [RFC PATCH 3/3] vfs: Enable list batching for the superblock's inode list Waiman Long
  2 siblings, 1 reply; 12+ messages in thread
From: Waiman Long @ 2016-01-26 16:03 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro
  Cc: linux-fsdevel, x86, linux-kernel, Peter Zijlstra, Scott J Norton,
	Douglas Hatch, Waiman Long

Linked list insertion or deletion under a lock is a very common activity
in the Linux kernel. If it is the only activity performed under the
lock, the locking overhead can be large compared with the time actually
spent on the insertion or deletion itself, especially on a large system
with many CPUs.

This patch introduces a simple list insertion/deletion batching facility
in which a group of list insertion and deletion operations are processed
as a single batch within one lock/unlock critical section. This can
reduce the locking overhead and improve overall system performance.

The fast path of this batching facility is similar in performance to
the "lock; listop; unlock;" sequence of the existing code. If the lock
is not immediately available, the caller enters the slowpath, where the
batching happens.

A new config option LIST_BATCHING is added so that the facility can be
enabled only on the architectures that want it.
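
For illustration, a caller that currently does "spin_lock(); list_op();
spin_unlock();" would be converted as in the sketch below (the "foo"
structure and functions are made-up names for illustration, not part of
this patch):

	#include <linux/list_batch.h>

	struct foo {
		spinlock_t		lock;	/* protects list */
		struct list_head	list;
		struct list_batch	batch;
	};

	static void foo_init(struct foo *f)
	{
		spin_lock_init(&f->lock);
		INIT_LIST_HEAD(&f->list);
		list_batch_init(&f->batch, &f->list);
	}

	static void foo_add(struct foo *f, struct list_head *entry)
	{
		/* was: lock; list_add(); unlock; */
		do_list_batch(lb_cmd_add, &f->lock, &f->batch, entry);
	}

	static void foo_del(struct foo *f, struct list_head *entry)
	{
		/* was: lock; list_del_init(); unlock; */
		do_list_batch(lb_cmd_del_init, &f->lock, &f->batch, entry);
	}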

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 include/linux/list_batch.h |  120 ++++++++++++++++++++++++++++++++++++++++++++
 lib/Kconfig                |    7 +++
 lib/Makefile               |    1 +
 lib/list_batch.c           |  117 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 245 insertions(+), 0 deletions(-)
 create mode 100644 include/linux/list_batch.h
 create mode 100644 lib/list_batch.c

diff --git a/include/linux/list_batch.h b/include/linux/list_batch.h
new file mode 100644
index 0000000..b8583e7
--- /dev/null
+++ b/include/linux/list_batch.h
@@ -0,0 +1,120 @@
+/*
+ * List insertion/deletion batching facility
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2016 Hewlett-Packard Enterprise Development LP
+ *
+ * Authors: Waiman Long <waiman.long@hpe.com>
+ */
+#ifndef __LINUX_LIST_BATCH_H
+#define __LINUX_LIST_BATCH_H
+
+#include <linux/spinlock.h>
+#include <linux/list.h>
+
+/*
+ * include/linux/list_batch.h
+ *
+ * Inserting or deleting an entry from a linked list under a spinlock is a
+ * very common operation in the Linux kernel. If many CPUs are trying to
+ * grab the lock and manipulate the linked list, it can lead to significant
+ * lock contention and slow operation.
+ *
+ * This list operation batching facility is used to batch multiple list
+ * operations under one lock/unlock critical section, thus reducing the
+ * locking overhead and improving overall performance.
+ */
+enum list_batch_cmd {
+	lb_cmd_add,
+	lb_cmd_del,
+	lb_cmd_del_init
+};
+
+enum list_batch_state {
+	lb_state_waiting,	/* Node is waiting */
+	lb_state_batch,		/* Queue head to perform batch processing */
+	lb_state_done		/* Job is done */
+};
+
+struct list_batch_qnode {
+	struct list_batch_qnode	*next;
+	struct list_head	*entry;
+	enum list_batch_cmd	cmd;
+	enum list_batch_state	state;
+};
+
+struct list_batch {
+	struct list_head	*list;
+	struct list_batch_qnode *tail;
+};
+
+#define LIST_BATCH_INIT(_list)	\
+	{			\
+		.list = _list,	\
+		.tail = NULL	\
+	}
+
+static inline void list_batch_init(struct list_batch *batch,
+				   struct list_head *list)
+{
+	batch->list = list;
+	batch->tail = NULL;
+}
+
+static __always_inline void _list_batch_cmd(enum list_batch_cmd cmd,
+					    struct list_head *head,
+					    struct list_head *entry)
+{
+	if (cmd == lb_cmd_add)
+		list_add(entry, head);
+	else if (cmd == lb_cmd_del)
+		list_del(entry);
+	else /* cmd == lb_cmd_del_init */
+		list_del_init(entry);
+}
+
+#ifdef CONFIG_LIST_BATCHING
+
+extern void do_list_batch_slowpath(enum list_batch_cmd cmd, spinlock_t *lock,
+				   struct list_batch *batch,
+				   struct list_head *entry);
+
+static inline void do_list_batch(enum list_batch_cmd cmd, spinlock_t *lock,
+				   struct list_batch *batch,
+				   struct list_head *entry)
+{
+	/*
+	 * Fast path
+	 */
+	if (spin_trylock(lock)) {
+		_list_batch_cmd(cmd, batch->list, entry);
+		spin_unlock(lock);
+		return;
+	}
+	do_list_batch_slowpath(cmd, lock, batch, entry);
+}
+
+
+#else /* CONFIG_LIST_BATCHING */
+
+static inline void do_list_batch(enum list_batch_cmd cmd, spinlock_t *lock,
+				   struct list_batch *batch,
+				   struct list_head *entry)
+{
+	spin_lock(lock);
+	_list_batch_cmd(cmd, batch->list, entry);
+	spin_unlock(lock);
+}
+
+#endif /* CONFIG_LIST_BATCHING */
+
+#endif /* __LINUX_LIST_BATCH_H */
diff --git a/lib/Kconfig b/lib/Kconfig
index 133ebc0..d75ce19 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -514,6 +514,13 @@ config OID_REGISTRY
 config UCS2_STRING
         tristate
 
+config LIST_BATCHING
+	def_bool y if ARCH_USE_LIST_BATCHING
+	depends on SMP
+
+config ARCH_USE_LIST_BATCHING
+	bool
+
 source "lib/fonts/Kconfig"
 
 config SG_SPLIT
diff --git a/lib/Makefile b/lib/Makefile
index a7c26a4..2791262 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -210,6 +210,7 @@ quiet_cmd_build_OID_registry = GEN     $@
 clean-files	+= oid_registry_data.c
 
 obj-$(CONFIG_UCS2_STRING) += ucs2_string.o
+obj-$(CONFIG_LIST_BATCHING) += list_batch.o
 obj-$(CONFIG_UBSAN) += ubsan.o
 
 UBSAN_SANITIZE_ubsan.o := n
diff --git a/lib/list_batch.c b/lib/list_batch.c
new file mode 100644
index 0000000..ac51d49
--- /dev/null
+++ b/lib/list_batch.c
@@ -0,0 +1,117 @@
+/*
+ * List insertion/deletion batching facility
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2016 Hewlett-Packard Enterprise Development LP
+ *
+ * Authors: Waiman Long <waiman.long@hpe.com>
+ */
+#include <linux/list_batch.h>
+
+/*
+ * List processing batch size = 128
+ *
+ * The batch size shouldn't be too large. Otherwise, it will be too unfair
+ * to the task doing the batch processing. It shouldn't be too small either,
+ * as the performance benefit will be reduced.
+ */
+#define LB_BATCH_SIZE	(1 << 7)
+
+/*
+ * Inserting or deleting an entry from a linked list under a spinlock is a
+ * very common operation in the Linux kernel. If many CPUs are trying to
+ * grab the lock and manipulate the linked list, it can lead to significant
+ * lock contention and slow operation.
+ *
+ * This list operation batching facility is used to batch multiple list
+ * operations under one lock/unlock critical section, thus reducing the
+ * locking overhead and improving overall performance.
+ */
+void do_list_batch_slowpath(enum list_batch_cmd cmd, spinlock_t *lock,
+			    struct list_batch *batch, struct list_head *entry)
+{
+	struct list_batch_qnode node, *prev, *next, *nptr;
+	int loop;
+
+	/*
+	 * Put itself into the list_batch queue
+	 */
+	node.next  = NULL;
+	node.entry = entry;
+	node.cmd   = cmd;
+	node.state = lb_state_waiting;
+
+	prev = xchg(&batch->tail, &node);
+
+	if (prev) {
+		WRITE_ONCE(prev->next, &node);
+		while (READ_ONCE(node.state) == lb_state_waiting)
+			cpu_relax();
+		if (node.state == lb_state_done)
+			return;
+		WARN_ON(node.state != lb_state_batch);
+	}
+
+	/*
+	 * We are now the queue head; we should now acquire the lock and
+	 * process a batch of qnodes.
+	 */
+	loop = LB_BATCH_SIZE;
+	next = &node;
+	spin_lock(lock);
+
+do_list_again:
+	do {
+		nptr = next;
+		_list_batch_cmd(nptr->cmd, batch->list, nptr->entry);
+		next = READ_ONCE(nptr->next);
+		/*
+		 * As soon as the state is marked lb_state_done, we
+		 * can no longer assume the content of *nptr is valid.
+		 * So we have to hold off marking it done until we no
+		 * longer need its content.
+		 *
+		 * The release barrier here is to make sure that we
+		 * won't access its content after marking it done.
+		 */
+		if (next)
+			smp_store_release(&nptr->state, lb_state_done);
+	} while (--loop && next);
+	if (!next) {
+		/*
+		 * The queue tail should be equal to nptr, so clear it to
+		 * mark the queue as empty.
+		 */
+		if (cmpxchg(&batch->tail, nptr, NULL) != nptr) {
+			/*
+			 * Queue not empty, wait until the next pointer is
+			 * initialized.
+			 */
+			while (!(next = READ_ONCE(nptr->next)))
+				cpu_relax();
+		}
+		/* The above cmpxchg acts as a memory barrier */
+		WRITE_ONCE(nptr->state, lb_state_done);
+	}
+	if (next) {
+		if (loop)
+			goto do_list_again;	/* More qnodes to process */
+		/*
+		 * Mark the next qnode as head to process the next batch
+		 * of qnodes. The new queue head cannot proceed until we
+		 * release the lock.
+		 */
+		WRITE_ONCE(next->state, lb_state_batch);
+	}
+	spin_unlock(lock);
+}
+EXPORT_SYMBOL_GPL(do_list_batch_slowpath);
-- 
1.7.1


* [RFC PATCH 2/3] lib/list_batch, x86: Enable list insertion/deletion batching in x86-64
  2016-01-26 16:03 [RFC PATCH 0/3] lib/list_batch: A simple list insertion/deletion batching facility Waiman Long
  2016-01-26 16:03 ` [RFC PATCH 1/3] " Waiman Long
@ 2016-01-26 16:03 ` Waiman Long
  2016-01-26 21:44   ` Andi Kleen
  2016-01-26 16:03 ` [RFC PATCH 3/3] vfs: Enable list batching for the superblock's inode list Waiman Long
  2 siblings, 1 reply; 12+ messages in thread
From: Waiman Long @ 2016-01-26 16:03 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro
  Cc: linux-fsdevel, x86, linux-kernel, Peter Zijlstra, Scott J Norton,
	Douglas Hatch, Waiman Long

Systems with a large number of CPUs will benefit from the list batching
facility. This patch enables it for x86-64 only, as 32-bit i386 is not
likely to gain much from it.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 arch/x86/Kconfig |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 330e738..443e41d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -42,6 +42,7 @@ config X86
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF		if X86_64
+	select ARCH_USE_LIST_BATCHING		if X86_64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if SMP
-- 
1.7.1


* [RFC PATCH 3/3] vfs: Enable list batching for the superblock's inode list
  2016-01-26 16:03 [RFC PATCH 0/3] lib/list_batch: A simple list insertion/deletion batching facility Waiman Long
  2016-01-26 16:03 ` [RFC PATCH 1/3] " Waiman Long
  2016-01-26 16:03 ` [RFC PATCH 2/3] lib/list_batch, x86: Enable list insertion/deletion batching in x86-64 Waiman Long
@ 2016-01-26 16:03 ` Waiman Long
  2 siblings, 0 replies; 12+ messages in thread
From: Waiman Long @ 2016-01-26 16:03 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro
  Cc: linux-fsdevel, x86, linux-kernel, Peter Zijlstra, Scott J Norton,
	Douglas Hatch, Waiman Long

The inode_sb_list_add() and inode_sb_list_del() functions in the VFS
layer just perform list addition and deletion under a lock, so they can
use the new list batching facility to speed up those operations when
many CPUs are trying to perform them simultaneously.

In particular, the inode_sb_list_del() function can be a performance
bottleneck when a large application with many threads and associated
inodes exits. This was measured with an exit microbenchmark that creates
a large number of threads, attaches many inodes to them and then exits.
The runtimes of that microbenchmark with 1000 threads, before and after
the patch, on a 4-socket Intel E7-4820 v3 system (48 cores, 96 threads)
were as follows:

  Kernel        Elapsed Time    System Time
  ------        ------------    -----------
  Vanilla 4.4      65.29s         82m14s
  Patched 4.4      45.69s         49m44s

The elapsed time and the reported system time were reduced by 30%
and 40% respectively.
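
The microbenchmark itself is not included in this posting. A rough
userspace approximation of that kind of exit test is sketched below;
it is illustrative only, with made-up file counts and paths, and is not
the exact program used for the numbers above:

	#include <fcntl.h>
	#include <pthread.h>
	#include <stdio.h>
	#include <sys/resource.h>
	#include <unistd.h>

	#define NTHREADS	1000
	#define NFILES		64	/* inodes attached per thread */

	static void *worker(void *arg)
	{
		long id = (long)arg;
		char name[64];
		int i;

		/* Each open, unlinked file pins an inode on the sb's s_inodes list. */
		for (i = 0; i < NFILES; i++) {
			snprintf(name, sizeof(name), "/tmp/lb-bench-%ld-%d", id, i);
			open(name, O_CREAT | O_RDWR, 0600);
			unlink(name);	/* inode stays alive via the open fd */
		}
		pause();		/* park until the whole process exits */
		return NULL;
	}

	int main(void)
	{
		struct rlimit rl = {
			.rlim_cur = NTHREADS * NFILES + 100,
			.rlim_max = NTHREADS * NFILES + 100,
		};
		pthread_t tid;
		long i;

		setrlimit(RLIMIT_NOFILE, &rl);	/* may need root to raise the hard limit */

		for (i = 0; i < NTHREADS; i++)
			pthread_create(&tid, NULL, worker, (void *)i);
		sleep(10);		/* let every thread attach its inodes */

		/*
		 * Returning from main() exits the process; tearing down all
		 * the open inodes exercises inode_sb_list_del() and thus
		 * s_inode_list_lock.
		 */
		return 0;
	}

Elapsed and system time can then be compared before and after the patch,
e.g. with time(1).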

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 fs/inode.c         |   13 +++++--------
 fs/super.c         |    1 +
 include/linux/fs.h |    2 ++
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/inode.c b/fs/inode.c
index 9f62db3..456bd8a 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -424,19 +424,16 @@ static void inode_lru_list_del(struct inode *inode)
  */
 void inode_sb_list_add(struct inode *inode)
 {
-	spin_lock(&inode->i_sb->s_inode_list_lock);
-	list_add(&inode->i_sb_list, &inode->i_sb->s_inodes);
-	spin_unlock(&inode->i_sb->s_inode_list_lock);
+	do_list_batch(lb_cmd_add, &inode->i_sb->s_inode_list_lock,
+			&inode->i_sb->s_list_batch, &inode->i_sb_list);
 }
 EXPORT_SYMBOL_GPL(inode_sb_list_add);
 
 static inline void inode_sb_list_del(struct inode *inode)
 {
-	if (!list_empty(&inode->i_sb_list)) {
-		spin_lock(&inode->i_sb->s_inode_list_lock);
-		list_del_init(&inode->i_sb_list);
-		spin_unlock(&inode->i_sb->s_inode_list_lock);
-	}
+	if (!list_empty(&inode->i_sb_list))
+		do_list_batch(lb_cmd_del_init, &inode->i_sb->s_inode_list_lock,
+				&inode->i_sb->s_list_batch, &inode->i_sb_list);
 }
 
 static unsigned long hash(struct super_block *sb, unsigned long hashval)
diff --git a/fs/super.c b/fs/super.c
index 1182af8..b0e8540 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -206,6 +206,7 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags)
 	mutex_init(&s->s_sync_lock);
 	INIT_LIST_HEAD(&s->s_inodes);
 	spin_lock_init(&s->s_inode_list_lock);
+	list_batch_init(&s->s_list_batch, &s->s_inodes);
 
 	if (list_lru_init_memcg(&s->s_dentry_lru))
 		goto fail;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 1a20462..11d8b77 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -9,6 +9,7 @@
 #include <linux/stat.h>
 #include <linux/cache.h>
 #include <linux/list.h>
+#include <linux/list_batch.h>
 #include <linux/list_lru.h>
 #include <linux/llist.h>
 #include <linux/radix-tree.h>
@@ -1403,6 +1404,7 @@ struct super_block {
 	/* s_inode_list_lock protects s_inodes */
 	spinlock_t		s_inode_list_lock ____cacheline_aligned_in_smp;
 	struct list_head	s_inodes;	/* all inodes */
+	struct list_batch	s_list_batch;
 };
 
 extern struct timespec current_fs_time(struct super_block *sb);
-- 
1.7.1


* Re: [RFC PATCH 2/3] lib/list_batch, x86: Enable list insertion/deletion batching in x86-64
  2016-01-26 16:03 ` [RFC PATCH 2/3] lib/list_batch, x86: Enable list insertion/deletion batching in x86-64 Waiman Long
@ 2016-01-26 21:44   ` Andi Kleen
  2016-01-27 16:38     ` Peter Zijlstra
  2016-01-27 20:34     ` Waiman Long
  0 siblings, 2 replies; 12+ messages in thread
From: Andi Kleen @ 2016-01-26 21:44 UTC (permalink / raw)
  To: Waiman Long
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro,
	linux-fsdevel, x86, linux-kernel, Peter Zijlstra, Scott J Norton,
	Douglas Hatch

Waiman Long <Waiman.Long@hpe.com> writes:
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 330e738..443e41d 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -42,6 +42,7 @@ config X86
>  	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
>  	select ARCH_USE_BUILTIN_BSWAP
>  	select ARCH_USE_CMPXCHG_LOCKREF		if X86_64
> +	select ARCH_USE_LIST_BATCHING		if X86_64

I would make it unconditional. The code is simple enough
and shouldn't have drawbacks on smaller systems.

-Andi


* Re: [RFC PATCH 1/3] lib/list_batch: A simple list insertion/deletion batching facility
  2016-01-26 16:03 ` [RFC PATCH 1/3] " Waiman Long
@ 2016-01-27 16:34   ` Peter Zijlstra
  2016-01-27 20:22     ` Waiman Long
  0 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2016-01-27 16:34 UTC (permalink / raw)
  To: Waiman Long
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro,
	linux-fsdevel, x86, linux-kernel, Scott J Norton, Douglas Hatch

On Tue, Jan 26, 2016 at 11:03:37AM -0500, Waiman Long wrote:
> +static __always_inline void _list_batch_cmd(enum list_batch_cmd cmd,
> +					    struct list_head *head,
> +					    struct list_head *entry)
> +{
> +	if (cmd == lb_cmd_add)
> +		list_add(entry, head);
> +	else if (cmd == lb_cmd_del)
> +		list_del(entry);
> +	else /* cmd == lb_cmd_del_init */
> +		list_del_init(entry);

Maybe use switch(), GCC has fancy warns with enums and switch().

> +}
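
For reference, a switch() based variant might look like this (a sketch,
not part of the posted patch):

	static __always_inline void _list_batch_cmd(enum list_batch_cmd cmd,
						    struct list_head *head,
						    struct list_head *entry)
	{
		switch (cmd) {
		case lb_cmd_add:
			list_add(entry, head);
			break;
		case lb_cmd_del:
			list_del(entry);
			break;
		case lb_cmd_del_init:
			list_del_init(entry);
			break;
		}
	}

With no default case, GCC's -Wswitch (part of -Wall) will then warn if a
new lb_cmd_* value is ever added without a matching case here.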

> +static inline void do_list_batch(enum list_batch_cmd cmd, spinlock_t *lock,
> +				   struct list_batch *batch,
> +				   struct list_head *entry)
> +{
> +	/*
> +	 * Fast path
> +	 */
> +	if (spin_trylock(lock)) {
> +		_list_batch_cmd(cmd, batch->list, entry);
> +		spin_unlock(lock);

This is still quite a lot of code for an inline function

> +		return;
> +	}
> +	do_list_batch_slowpath(cmd, lock, batch, entry);
> +}



> +void do_list_batch_slowpath(enum list_batch_cmd cmd, spinlock_t *lock,
> +			    struct list_batch *batch, struct list_head *entry)
> +{
> +	struct list_batch_qnode node, *prev, *next, *nptr;
> +	int loop;
> +
> +	/*
> +	 * Put itself into the list_batch queue
> +	 */
> +	node.next  = NULL;
> +	node.entry = entry;
> +	node.cmd   = cmd;
> +	node.state = lb_state_waiting;
> +

Here we rely on the release barrier implied by xchg() to ensure the node
initialization is complete before the xchg() publishes the thing.

But do we also need the acquire part of this barrier? From what I could
tell, the primitive as a whole does not imply any ordering.

> +	prev = xchg(&batch->tail, &node);
> +
> +	if (prev) {
> +		WRITE_ONCE(prev->next, &node);
> +		while (READ_ONCE(node.state) == lb_state_waiting)
> +			cpu_relax();
> +		if (node.state == lb_state_done)
> +			return;
> +		WARN_ON(node.state != lb_state_batch);
> +	}
> +
> +	/*
> +	 * We are now the queue head, we shold now acquire the lock and
> +	 * process a batch of qnodes.
> +	 */
> +	loop = LB_BATCH_SIZE;

Have you tried different sizes?

> +	next = &node;
> +	spin_lock(lock);
> +
> +do_list_again:
> +	do {
> +		nptr = next;
> +		_list_batch_cmd(nptr->cmd, batch->list, nptr->entry);
> +		next = READ_ONCE(nptr->next);
> +		/*
> +		 * As soon as the state is marked lb_state_done, we
> +		 * can no longer assume the content of *nptr as valid.
> +		 * So we have to hold off marking it done until we no
> +		 * longer need its content.
> +		 *
> +		 * The release barrier here is to make sure that we
> +		 * won't access its content after marking it done.
> +		 */
> +		if (next)
> +			smp_store_release(&nptr->state, lb_state_done);
> +	} while (--loop && next);
> +	if (!next) {
> +		/*
> +		 * The queue tail should equal to nptr, so clear it to
> +		 * mark the queue as empty.
> +		 */
> +		if (cmpxchg(&batch->tail, nptr, NULL) != nptr) {
> +			/*
> +			 * Queue not empty, wait until the next pointer is
> +			 * initialized.
> +			 */
> +			while (!(next = READ_ONCE(nptr->next)))
> +				cpu_relax();
> +		}
> +		/* The above cmpxchg acts as a memory barrier */

for what? :-)

Also, if that cmpxchg() fails, it very much does _not_ act as one.

I suspect you want smp_store_release() setting the state_done, just as
above, and then use cmpxchg_relaxed().
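
In code, I'd expect that to end up looking roughly like this (untested
sketch of the suggested change):

	if (!next) {
		/*
		 * The tail update itself needs no ordering; the release
		 * store below guarantees the list operation on nptr->entry
		 * (and the read of nptr->next) is complete before the node
		 * owner can observe lb_state_done and reuse the node.
		 */
		if (cmpxchg_relaxed(&batch->tail, nptr, NULL) != nptr) {
			while (!(next = READ_ONCE(nptr->next)))
				cpu_relax();
		}
		smp_store_release(&nptr->state, lb_state_done);
	}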

> +		WRITE_ONCE(nptr->state, lb_state_done);
> +	}
> +	if (next) {
> +		if (loop)
> +			goto do_list_again;	/* More qnodes to process */
> +		/*
> +		 * Mark the next qnode as head to process the next batch
> +		 * of qnodes. The new queue head cannot proceed until we
> +		 * release the lock.
> +		 */
> +		WRITE_ONCE(next->state, lb_state_batch);
> +	}
> +	spin_unlock(lock);
> +}
> +EXPORT_SYMBOL_GPL(do_list_batch_slowpath);


* Re: [RFC PATCH 2/3] lib/list_batch, x86: Enable list insertion/deletion batching in x86-64
  2016-01-26 21:44   ` Andi Kleen
@ 2016-01-27 16:38     ` Peter Zijlstra
  2016-01-27 20:34     ` Waiman Long
  1 sibling, 0 replies; 12+ messages in thread
From: Peter Zijlstra @ 2016-01-27 16:38 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Waiman Long, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Alexander Viro, linux-fsdevel, x86, linux-kernel, Scott J Norton,
	Douglas Hatch

On Tue, Jan 26, 2016 at 01:44:13PM -0800, Andi Kleen wrote:
> Waiman Long <Waiman.Long@hpe.com> writes:
> >
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index 330e738..443e41d 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -42,6 +42,7 @@ config X86
> >  	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
> >  	select ARCH_USE_BUILTIN_BSWAP
> >  	select ARCH_USE_CMPXCHG_LOCKREF		if X86_64
> > +	select ARCH_USE_LIST_BATCHING		if X86_64
> 
> I would make it unconditional. The code is simple enough
> and shouldn't have drawbacks on smaller systems.

I agree with the sentiment but would like to see a benchmark done on a
small system to verify all the same.


* Re: [RFC PATCH 1/3] lib/list_batch: A simple list insertion/deletion batching facility
  2016-01-27 16:34   ` Peter Zijlstra
@ 2016-01-27 20:22     ` Waiman Long
  2016-01-27 20:54       ` Peter Zijlstra
  0 siblings, 1 reply; 12+ messages in thread
From: Waiman Long @ 2016-01-27 20:22 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro,
	linux-fsdevel, x86, linux-kernel, Scott J Norton, Douglas Hatch

On 01/27/2016 11:34 AM, Peter Zijlstra wrote:
> On Tue, Jan 26, 2016 at 11:03:37AM -0500, Waiman Long wrote:
>> +static __always_inline void _list_batch_cmd(enum list_batch_cmd cmd,
>> +					    struct list_head *head,
>> +					    struct list_head *entry)
>> +{
>> +	if (cmd == lb_cmd_add)
>> +		list_add(entry, head);
>> +	else if (cmd == lb_cmd_del)
>> +		list_del(entry);
>> +	else /* cmd == lb_cmd_del_init */
>> +		list_del_init(entry);
> Maybe use switch(), GCC has fancy warns with enums and switch().

OK, I will look at the generated code to see if there is any difference.

>
>> +}
>> +static inline void do_list_batch(enum list_batch_cmd cmd, spinlock_t *lock,
>> +				   struct list_batch *batch,
>> +				   struct list_head *entry)
>> +{
>> +	/*
>> +	 * Fast path
>> +	 */
>> +	if (spin_trylock(lock)) {
>> +		_list_batch_cmd(cmd, batch->list, entry);
>> +		spin_unlock(lock);
> This is still quite a lot of code for an inline function

I expect the callers will call it with a constant cmd, thus optimizing 
out all the if conditional checks in _list_batch_cmd(). Taking the 
inline out will probably stop that optimization.
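
(For instance, the patch 3 call sites only ever pass a compile-time
constant command:

	do_list_batch(lb_cmd_del_init, &inode->i_sb->s_inode_list_lock,
			&inode->i_sb->s_list_batch, &inode->i_sb_list);

so the inlined _list_batch_cmd() should fold down to a single
list_del_init() call with no branches on the fast path.)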

>> +		return;
>> +	}
>> +	do_list_batch_slowpath(cmd, lock, batch, entry);
>> +}
>
>
>> +void do_list_batch_slowpath(enum list_batch_cmd cmd, spinlock_t *lock,
>> +			    struct list_batch *batch, struct list_head *entry)
>> +{
>> +	struct list_batch_qnode node, *prev, *next, *nptr;
>> +	int loop;
>> +
>> +	/*
>> +	 * Put itself into the list_batch queue
>> +	 */
>> +	node.next  = NULL;
>> +	node.entry = entry;
>> +	node.cmd   = cmd;
>> +	node.state = lb_state_waiting;
>> +
> Here we rely on the release barrier implied by xchg() to ensure the node
> initialization is complete before the xchg() publishes the thing.
>
> But do we also need the acquire part of this barrier? From what I could
> tell, the primitive as a whole does not imply any ordering.

I think we probably won't need the acquire part, but I don't have a 
non-x86 machine that can really test out the more relaxed versions of 
the atomic ops. That is why I use the strict versions. We can always 
relax it later on with additional patches.

>
>> +	prev = xchg(&batch->tail, &node);
>> +
>> +	if (prev) {
>> +		WRITE_ONCE(prev->next, &node);
>> +		while (READ_ONCE(node.state) == lb_state_waiting)
>> +			cpu_relax();
>> +		if (node.state == lb_state_done)
>> +			return;
>> +		WARN_ON(node.state != lb_state_batch);
>> +	}
>> +
>> +	/*
>> +	 * We are now the queue head, we shold now acquire the lock and
>> +	 * process a batch of qnodes.
>> +	 */
>> +	loop = LB_BATCH_SIZE;
> Have you tried different sizes?

I have tried 64 and 128. Using 128 seems to give slightly better
performance numbers.

>> +	next = &node;
>> +	spin_lock(lock);
>> +
>> +do_list_again:
>> +	do {
>> +		nptr = next;
>> +		_list_batch_cmd(nptr->cmd, batch->list, nptr->entry);
>> +		next = READ_ONCE(nptr->next);
>> +		/*
>> +		 * As soon as the state is marked lb_state_done, we
>> +		 * can no longer assume the content of *nptr as valid.
>> +		 * So we have to hold off marking it done until we no
>> +		 * longer need its content.
>> +		 *
>> +		 * The release barrier here is to make sure that we
>> +		 * won't access its content after marking it done.
>> +		 */
>> +		if (next)
>> +			smp_store_release(&nptr->state, lb_state_done);
>> +	} while (--loop && next);
>> +	if (!next) {
>> +		/*
>> +		 * The queue tail should equal to nptr, so clear it to
>> +		 * mark the queue as empty.
>> +		 */
>> +		if (cmpxchg(&batch->tail, nptr, NULL) != nptr) {
>> +			/*
>> +			 * Queue not empty, wait until the next pointer is
>> +			 * initialized.
>> +			 */
>> +			while (!(next = READ_ONCE(nptr->next)))
>> +				cpu_relax();
>> +		}
>> +		/* The above cmpxchg acts as a memory barrier */
> for what? :-)
>
> Also, if that cmpxchg() fails, it very much does _not_ act as one.
>
> I suspect you want smp_store_release() setting the state_done, just as
> above, and then use cmpxchg_relaxed().

You are right. I did forget that there is no memory barrier guarantee
when cmpxchg() fails. However, in that case, the READ_ONCE() and
WRITE_ONCE() macros should still provide the necessary ordering, IMO. I
can certainly change it to use cmpxchg_relaxed() and smp_store_release()
instead.

>
>> +		WRITE_ONCE(nptr->state, lb_state_done);
>> +	}
>> +	if (next) {
>> +		if (loop)
>> +			goto do_list_again;	/* More qnodes to process */
>> +		/*
>> +		 * Mark the next qnode as head to process the next batch
>> +		 * of qnodes. The new queue head cannot proceed until we
>> +		 * release the lock.
>> +		 */
>> +		WRITE_ONCE(next->state, lb_state_batch);
>> +	}
>> +	spin_unlock(lock);
>> +}
>> +EXPORT_SYMBOL_GPL(do_list_batch_slowpath);

Cheers,
Longman


* Re: [RFC PATCH 2/3] lib/list_batch, x86: Enable list insertion/deletion batching in x86-64
  2016-01-26 21:44   ` Andi Kleen
  2016-01-27 16:38     ` Peter Zijlstra
@ 2016-01-27 20:34     ` Waiman Long
  1 sibling, 0 replies; 12+ messages in thread
From: Waiman Long @ 2016-01-27 20:34 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro,
	linux-fsdevel, x86, linux-kernel, Peter Zijlstra, Scott J Norton,
	Douglas Hatch

On 01/26/2016 04:44 PM, Andi Kleen wrote:
Waiman Long <Waiman.Long@hpe.com> writes:
>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>> index 330e738..443e41d 100644
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -42,6 +42,7 @@ config X86
>>   	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
>>   	select ARCH_USE_BUILTIN_BSWAP
>>   	select ARCH_USE_CMPXCHG_LOCKREF		if X86_64
>> +	select ARCH_USE_LIST_BATCHING		if X86_64
> I would make it unconditional. The code is simple enough
> and shouldn't have drawbacks on smaller systems.
>
> -Andi

You are probably right. I will look into that.

Cheers,
Longman


* Re: [RFC PATCH 1/3] lib/list_batch: A simple list insertion/deletion batching facility
  2016-01-27 20:22     ` Waiman Long
@ 2016-01-27 20:54       ` Peter Zijlstra
  2016-01-28 16:45         ` Waiman Long
  0 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2016-01-27 20:54 UTC (permalink / raw)
  To: Waiman Long
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro,
	linux-fsdevel, x86, linux-kernel, Scott J Norton, Douglas Hatch

On Wed, Jan 27, 2016 at 03:22:19PM -0500, Waiman Long wrote:

> >>+	/*
> >>+	 * Put itself into the list_batch queue
> >>+	 */
> >>+	node.next  = NULL;
> >>+	node.entry = entry;
> >>+	node.cmd   = cmd;
> >>+	node.state = lb_state_waiting;
> >>+
> >Here we rely on the release barrier implied by xchg() to ensure the node
> >initialization is complete before the xchg() publishes the thing.
> >
> >But do we also need the acquire part of this barrier? From what I could
> >tell, the primitive as a whole does not imply any ordering.
> 
> I think we probably won't need the acquire part, but I don't have a non-x86
> machine that can really test out the more relaxed versions of the atomic
> ops. That is why I use the strict versions. We can always relax it later on
> with additional patches.

Yeah, I have no hardware either. But at least we should comment the bits
we do know to rely upon.


> >>+	if (!next) {
> >>+		/*
> >>+		 * The queue tail should equal to nptr, so clear it to
> >>+		 * mark the queue as empty.
> >>+		 */
> >>+		if (cmpxchg(&batch->tail, nptr, NULL) != nptr) {
> >>+			/*
> >>+			 * Queue not empty, wait until the next pointer is
> >>+			 * initialized.
> >>+			 */
> >>+			while (!(next = READ_ONCE(nptr->next)))
> >>+				cpu_relax();
> >>+		}
> >>+		/* The above cmpxchg acts as a memory barrier */
> >for what? :-)
> >
> >Also, if that cmpxchg() fails, it very much does _not_ act as one.
> >
> >I suspect you want smp_store_release() setting the state_done, just as
> >above, and then use cmpxchg_relaxed().
> 
> You are right. I did forgot about there was no memory barrier guarantee when
> cmpxchg() fails. 

> However, in that case, the READ_ONCE() and WRITE_ONCE()
> macros should still provide the necessary ordering, IMO.

READ/WRITE_ONCE() provide _no_ ordering whatsoever. And the issue here is
that we must not do any other stores to nptr after the state_done store.

> I can certainly
> change it to use cmpxchg_relaxed() and smp_store_release() instead.

That seems a safe combination and would still generate the exact same
code on x86.


* Re: [RFC PATCH 1/3] lib/list_batch: A simple list insertion/deletion batching facility
  2016-01-27 20:54       ` Peter Zijlstra
@ 2016-01-28 16:45         ` Waiman Long
  2016-01-28 18:35           ` Peter Zijlstra
  0 siblings, 1 reply; 12+ messages in thread
From: Waiman Long @ 2016-01-28 16:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro,
	linux-fsdevel, x86, linux-kernel, Scott J Norton, Douglas Hatch

On 01/27/2016 03:54 PM, Peter Zijlstra wrote:
> On Wed, Jan 27, 2016 at 03:22:19PM -0500, Waiman Long wrote:
>
>>>> +	/*
>>>> +	 * Put itself into the list_batch queue
>>>> +	 */
>>>> +	node.next  = NULL;
>>>> +	node.entry = entry;
>>>> +	node.cmd   = cmd;
>>>> +	node.state = lb_state_waiting;
>>>> +
>>> Here we rely on the release barrier implied by xchg() to ensure the node
>>> initialization is complete before the xchg() publishes the thing.
>>>
>>> But do we also need the acquire part of this barrier? From what I could
>>> tell, the primitive as a whole does not imply any ordering.
>> I think we probably won't need the acquire part, but I don't have a non-x86
>> machine that can really test out the more relaxed versions of the atomic
>> ops. That is why I use the strict versions. We can always relax it later on
>> with additional patches.
> Yeah, I have no hardware either. But at least we should comment the bits
> we do know to rely upon.
>

Using xchg_release() looks OK to me. As this feature is enabled on x86
only for this patch, we can make the change, and whoever enables it for
other architectures that have a real release function will have to test it.

>>>> +	if (!next) {
>>>> +		/*
>>>> +		 * The queue tail should equal to nptr, so clear it to
>>>> +		 * mark the queue as empty.
>>>> +		 */
>>>> +		if (cmpxchg(&batch->tail, nptr, NULL) != nptr) {
>>>> +			/*
>>>> +			 * Queue not empty, wait until the next pointer is
>>>> +			 * initialized.
>>>> +			 */
>>>> +			while (!(next = READ_ONCE(nptr->next)))
>>>> +				cpu_relax();
>>>> +		}
>>>> +		/* The above cmpxchg acts as a memory barrier */
>>> for what? :-)
>>>
>>> Also, if that cmpxchg() fails, it very much does _not_ act as one.
>>>
>>> I suspect you want smp_store_release() setting the state_done, just as
>>> above, and then use cmpxchg_relaxed().
>> You are right. I did forgot about there was no memory barrier guarantee when
>> cmpxchg() fails.
>> However, in that case, the READ_ONCE() and WRITE_ONCE()
>> macros should still provide the necessary ordering, IMO.
> READ/WRITE_ONCE() provide _no_ order what so ever. And the issue here is
> that we must not do any other stores to nptr after the state_done.
>

I thought that if those macros access the same cacheline, the compiler
won't change the ordering and the hardware will take care of the proper
ordering. Anyway, I do intend to change it to use smp_store_release() for
safety.

>> I can certainly
>> change it to use cmpxchg_relaxed() and smp_store_release() instead.
> That seems a safe combination and would still generate the exact same
> code on x86.

Cheers,
Longman


* Re: [RFC PATCH 1/3] lib/list_batch: A simple list insertion/deletion batching facility
  2016-01-28 16:45         ` Waiman Long
@ 2016-01-28 18:35           ` Peter Zijlstra
  0 siblings, 0 replies; 12+ messages in thread
From: Peter Zijlstra @ 2016-01-28 18:35 UTC (permalink / raw)
  To: Waiman Long
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Alexander Viro,
	linux-fsdevel, x86, linux-kernel, Scott J Norton, Douglas Hatch

On Thu, Jan 28, 2016 at 11:45:40AM -0500, Waiman Long wrote:
> Using xchg_release() looks OK to me. As this feature is enabled on x86 only
> for this patch, we can make the change and whoever enabling it for other
> architectures that have a real release function will have to test it.

Ah, I was more thinking about:

	/*
	 * We rely on the memory barrier implied by xchg() below to
	 * ensure the node initialization is complete before it is
	 * published.
	 */

And then use xchg() like you already do.
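
Dropped into the slowpath, that would read roughly (sketch):

	node.next  = NULL;
	node.entry = entry;
	node.cmd   = cmd;
	node.state = lb_state_waiting;

	/*
	 * We rely on the memory barrier implied by xchg() below to
	 * ensure the node initialization is complete before it is
	 * published.
	 */
	prev = xchg(&batch->tail, &node);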


> >READ/WRITE_ONCE() provide _no_ order what so ever. And the issue here is
> >that we must not do any other stores to nptr after the state_done.
> >
> 
> I thought if those macros are accessing the same cacheline, the compiler
> won't change the ordering and the hardware will take care of the proper
> ordering. Anyway, I do intended to change to use smp_store_release() for
> safety.

The macros use a volatile cast, which ensures the compiler must emit
the store and must emit it as a single store. I'm not 100% sure about
the rules for the compiler reordering volatile accesses; they are not a
compiler barrier.
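
To illustrate on the handoff in question (the two stores below are
alternatives, not a sequence; sketch only):

	/*
	 * Plain once-store: the compiler must emit it exactly once, but
	 * there is no guarantee that the earlier plain stores of the
	 * list manipulation are ordered before it.
	 */
	WRITE_ONCE(nptr->state, lb_state_done);

	/*
	 * Release store: all loads and stores before it (the list
	 * operation on nptr->entry, the read of nptr->next) are ordered
	 * before the store becomes visible to the node owner.
	 */
	smp_store_release(&nptr->state, lb_state_done);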

