From: Matthew Wilcox <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: Matthew Wilcox <mawilcox@microsoft.com>,
Ross Zwisler <ross.zwisler@linux.intel.com>,
David Howells <dhowells@redhat.com>, Shaohua Li <shli@kernel.org>,
Jens Axboe <axboe@kernel.dk>, Rehas Sachdeva <aquannie@gmail.com>,
Marc Zyngier <marc.zyngier@arm.com>,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
linux-f2fs-devel@lists.sourceforge.net,
linux-nilfs@vger.kernel.org, linux-btrfs@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-usb@vger.kernel.org,
linux-raid@vger.kernel.org
Subject: [PATCH v5 18/78] xarray: Add xas_next and xas_prev
Date: Fri, 15 Dec 2017 14:03:50 -0800
Message-ID: <20171215220450.7899-19-willy@infradead.org>
In-Reply-To: <20171215220450.7899-1-willy@infradead.org>
From: Matthew Wilcox <mawilcox@microsoft.com>

These two functions move the xas index by one position and adjust the rest
of the iterator state to match it.  This is more efficient than calling
xas_set() because it keeps the iterator at the leaves of the tree instead
of re-walking from the root each time.
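
For illustration only (this snippet is not part of the patch), a minimal
sketch of walking a range of consecutive indices with xas_next() rather
than re-walking from the root with xas_set()/xas_load() on every index;
'pages', 'start', 'end' and process_entry() are hypothetical placeholders:

	XA_STATE(xas, &pages, start);
	unsigned long index = start;
	void *entry;

	rcu_read_lock();
	for (entry = xas_load(&xas); index <= end;
	     entry = xas_next(&xas), index++) {
		/* Empty slots return NULL; a real caller may also need to
		 * handle internal (e.g. retry) entries. */
		if (entry)
			process_entry(index, entry);
	}
	rcu_read_unlock();
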
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
---
include/linux/xarray.h | 71 ++++++++++-
lib/xarray.c | 74 ++++++++++++
tools/testing/radix-tree/xarray-test.c | 213 +++++++++++++++++++++++++++++++++
3 files changed, 356 insertions(+), 2 deletions(-)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index fea81383a301..2d889208b68d 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -640,6 +640,12 @@ static inline bool xas_not_node(struct xa_node *node)
return ((unsigned long)node & 3) || !node;
}
+/* True if the node represents RESTART or an error */
+static inline bool xas_frozen(struct xa_node *node)
+{
+ return (unsigned long)node & 2;
+}
+
/* True if the node represents head-of-tree, RESTART or BOUNDS */
static inline bool xas_top(struct xa_node *node)
{
@@ -758,8 +764,8 @@ static inline bool xa_iter_skip(const void *entry)
}
/*
- * node->shift is always 0 for the inline iterators unless we're processing
- * a multi-index entry.
+ * node->shift is always 0 for next_entry and next_tag unless we're processing
+ * a multi-index entry. It can be non-0 for next/prev, so it's not used there.
*/
#ifdef CONFIG_RADIX_TREE_MULTIORDER
#define xa_node_shift(node) node->shift
@@ -767,6 +773,67 @@ static inline bool xa_iter_skip(const void *entry)
#define xa_node_shift(node) 0
#endif
+void *__xas_next(struct xa_state *);
+void *__xas_prev(struct xa_state *);
+
+/**
+ * xas_prev() - Move iterator to previous index.
+ * @xas: XArray operation state.
+ *
+ * If the @xas was in an error state, it will remain in an error state
+ * and this function will return %NULL. If the @xas has never been walked,
+ * it will have the effect of calling xas_load(). Otherwise one will be
+ * subtracted from the index and the state will be walked to the correct
+ * location in the array for the next operation.
+ *
+ * If the iterator was referencing index 0, this function wraps
+ * around to %ULONG_MAX.
+ *
+ * Return: The entry at the new index. This may be %NULL or an internal
+ * entry, although it should never be a node entry.
+ */
+static inline void *xas_prev(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (unlikely(xas_not_node(node) || node->shift ||
+ xas->xa_offset == 0))
+ return __xas_prev(xas);
+
+ xas->xa_index--;
+ xas->xa_offset--;
+ return xa_entry(xas->xa, node, xas->xa_offset);
+}
+
+/**
+ * xas_next() - Move iterator to next index.
+ * @xas: XArray operation state.
+ *
+ * If the @xas was in an error state, it will remain in an error state
+ * and this function will return %NULL. If the @xas has never been walked,
+ * it will have the effect of calling xas_load(). Otherwise one will be
+ * added to the index and the state will be walked to the correct
+ * location in the array for the next operation.
+ *
+ * If the iterator was referencing index %ULONG_MAX, this function wraps
+ * around to 0.
+ *
+ * Return: The entry at the new index. This may be %NULL or an internal
+ * entry, although it should never be a node entry.
+ */
+static inline void *xas_next(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (unlikely(xas_not_node(node) || node->shift ||
+ xas->xa_offset == XA_CHUNK_MASK))
+ return __xas_next(xas);
+
+ xas->xa_index++;
+ xas->xa_offset++;
+ return xa_entry(xas->xa, node, xas->xa_offset);
+}
+
/**
* xas_next_entry() - Advance iterator to next present entry.
* @xas: XArray operation state.
diff --git a/lib/xarray.c b/lib/xarray.c
index a51fa1e1f74f..a4975aeedf6b 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -838,6 +838,80 @@ void xas_pause(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_pause);
+/*
+ * __xas_prev() - Find the previous entry in the XArray.
+ * @xas: XArray operation state.
+ *
+ * Helper function for xas_prev() which handles all the complex cases
+ * out of line.
+ */
+void *__xas_prev(struct xa_state *xas)
+{
+ void *entry;
+
+ if (!xas_frozen(xas->xa_node))
+ xas->xa_index--;
+ if (xas_not_node(xas->xa_node))
+ return xas_load(xas);
+
+ if (xas->xa_offset != get_offset(xas->xa_index, xas->xa_node))
+ xas->xa_offset--;
+
+ while (xas->xa_offset == 255) {
+ xas->xa_offset = xas->xa_node->offset - 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ if (!xas->xa_node)
+ return set_bounds(xas);
+ }
+
+ for (;;) {
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (!xa_is_node(entry))
+ return entry;
+
+ xas->xa_node = xa_to_node(entry);
+ xas->xa_offset = get_offset(xas->xa_index, xas->xa_node);
+ }
+}
+EXPORT_SYMBOL_GPL(__xas_prev);
+
+/*
+ * __xas_next() - Find the next entry in the XArray.
+ * @xas: XArray operation state.
+ *
+ * Helper function for xas_next() which handles all the complex cases
+ * out of line.
+ */
+void *__xas_next(struct xa_state *xas)
+{
+ void *entry;
+
+ if (!xas_frozen(xas->xa_node))
+ xas->xa_index++;
+ if (xas_not_node(xas->xa_node))
+ return xas_load(xas);
+
+ if (xas->xa_offset != get_offset(xas->xa_index, xas->xa_node))
+ xas->xa_offset++;
+
+ while (xas->xa_offset == XA_CHUNK_SIZE) {
+ xas->xa_offset = xas->xa_node->offset + 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ if (!xas->xa_node)
+ return set_bounds(xas);
+ }
+
+ for (;;) {
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (!xa_is_node(entry))
+ return entry;
+
+ xas->xa_node = xa_to_node(entry);
+ xas->xa_offset = get_offset(xas->xa_index, xas->xa_node);
+ }
+}
+EXPORT_SYMBOL_GPL(__xas_next);
+
/**
* xas_find() - Find the next present entry in the XArray.
* @xas: XArray operation state.
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 10de5d3d977a..43111786ebdd 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -92,6 +92,104 @@ void check_xas_error(struct xarray *xa)
assert(xas.xa_node == XAS_BOUNDS);
}
+void check_xas_pause(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+ void *entry;
+ unsigned int seen;
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_set_tag(xa, 0, XA_TAG_0);
+
+ seen = 0;
+ rcu_read_lock();
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++) {
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ xa_set_tag(xa, 1, XA_TAG_0);
+ }
+ }
+ rcu_read_unlock();
+ /* We don't see an entry that was added after we started */
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ }
+ rcu_read_unlock();
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ seen = 0;
+ xas_set(&xas, 0);
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ xa_set_tag(xa, 1, XA_TAG_0);
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+}
+
void check_xas_retry(struct xarray *xa)
{
XA_STATE(xas, xa, 0);
@@ -249,9 +347,108 @@ void check_xas_delete(struct xarray *xa)
}
}
+void check_move_small(struct xarray *xa, unsigned long idx)
+{
+ XA_STATE(xas, xa, 0);
+ unsigned long i;
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_store(xa, idx, xa_mk_value(idx), GFP_KERNEL);
+
+ for (i = 0; i < idx * 4; i++) {
+ void *entry = xas_next(&xas);
+ if (i <= idx)
+ assert(xas.xa_node != XAS_RESTART);
+ assert(xas.xa_index == i);
+ if (i == 0 || i == idx)
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ }
+ xas_next(&xas);
+ assert(xas.xa_index == i);
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ if (i <= idx)
+ assert(xas.xa_node != XAS_RESTART);
+ assert(xas.xa_index == i);
+ if (i == 0 || i == idx)
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ } while (i > 0);
+
+ xas_set(&xas, ULONG_MAX);
+ assert(xas_next(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+ assert(xas_next(&xas) == xa_mk_value(0));
+ assert(xas.xa_index == 0);
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+}
+
+void check_move(struct xarray *xa)
+{
+ XA_STATE(xas, xa, (1 << 16) - 1);
+ unsigned long i;
+
+ for (i = 0; i < (1 << 16); i++) {
+ xa_store(xa, i, xa_mk_value(i), GFP_KERNEL);
+ }
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ assert(entry == xa_mk_value(i));
+ assert(i == xas.xa_index);
+ } while (i != 0);
+
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+
+ do {
+ void *entry = xas_next(&xas);
+ assert(entry == xa_mk_value(i));
+ assert(i == xas.xa_index);
+ i++;
+ } while (i < (1 << 16));
+
+ for (i = (1 << 8); i < (1 << 15); i++) {
+ xa_erase(xa, i);
+ }
+
+ i = xas.xa_index;
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ if ((i < (1 << 8)) || (i >= (1 << 15)))
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ assert(i == xas.xa_index);
+ } while (i != 0);
+
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+
+ do {
+ void *entry = xas_next(&xas);
+ if ((i < (1 << 8)) || (i >= (1 << 15)))
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ assert(i == xas.xa_index);
+ i++;
+ } while (i < (1 << 16));
+}
+
void xarray_checks(void)
{
DEFINE_XARRAY(array);
+ unsigned long i;
check_xa_err(&array);
item_kill_tree(&array);
@@ -265,6 +462,9 @@ void xarray_checks(void)
check_xas_retry(&array);
item_kill_tree(&array);
+ check_xas_pause(&array);
+ item_kill_tree(&array);
+
check_xa_load(&array);
item_kill_tree(&array);
@@ -279,6 +479,19 @@ void xarray_checks(void)
check_xas_delete(&array);
item_kill_tree(&array);
+
+ for (i = 0; i < 16; i++) {
+ check_move_small(&array, 1UL << i);
+ item_kill_tree(&array);
+ }
+
+ for (i = 2; i < 16; i++) {
+ check_move_small(&array, (1UL << i) - 1);
+ item_kill_tree(&array);
+ }
+
+ check_move(&array);
+ item_kill_tree(&array);
}
int __weak main(void)
--
2.15.1