* [PATCH v1 net-next 0/6] af_unix: GC cleanup and optimisation
@ 2024-05-03 22:31 Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 1/6] af_unix: Add dead flag to struct scm_fp_list Kuniyuki Iwashima
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-03 22:31 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kuniyuki Iwashima, Kuniyuki Iwashima, netdev

The first patch closes a small race window introduced by commit 1af2dface5d2
("af_unix: Don't access successor in unix_del_edges() during GC.").

The other patches clean up the GC and optimise it so that we no longer
schedule it when we know that no loop exists in the inflight graph.


Kuniyuki Iwashima (6):
  af_unix: Add dead flag to struct scm_fp_list.
  af_unix: Save the number of loops in inflight graph.
  af_unix: Manage inflight graph state as unix_graph_state.
  af_unix: Move wait_for_unix_gc() to unix_prepare_fpl().
  af_unix: Schedule GC based on graph state during sendmsg().
  af_unix: Schedule GC only if loop exists during close().

 include/net/af_unix.h |   4 +-
 include/net/scm.h     |   1 +
 net/core/scm.c        |   1 +
 net/unix/af_unix.c    |   7 +--
 net/unix/garbage.c    | 117 ++++++++++++++++++++++++++----------------
 5 files changed, 76 insertions(+), 54 deletions(-)

-- 
2.30.2


* [PATCH v1 net-next 1/6] af_unix: Add dead flag to struct scm_fp_list.
  2024-05-03 22:31 [PATCH v1 net-next 0/6] af_unix: GC cleanup and optimisation Kuniyuki Iwashima
@ 2024-05-03 22:31 ` Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph Kuniyuki Iwashima
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-03 22:31 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kuniyuki Iwashima, Kuniyuki Iwashima, netdev

Commit 1af2dface5d2 ("af_unix: Don't access successor in unix_del_edges()
during GC.") fixed a use-after-free by avoiding access to edge->successor
while GC is in progress.

However, there is a small race window where another process can call
unix_del_edges() while gc_in_progress is true and __skb_queue_purge()
is underway.

So, we cannot rely on gc_in_progress and need another marker per struct
scm_fp_list that indicates whether the skb is being garbage-collected.

This patch adds a dead flag to struct scm_fp_list and sets it to true
before calling __skb_queue_purge() in __unix_gc().
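
My rough reading of the window (an illustrative interleaving, not a
literal trace):

  GC worker                        another process
  ---------                        ---------------
  gc_in_progress = true
  lock unix_gc_lock
  build hitlist
  unlock unix_gc_lock
                                   lock unix_gc_lock
                                   unix_del_edges(fpl)
                                     /* fpl is NOT in the hitlist, but
                                      * gc_in_progress is still true, so
                                      * unix_update_graph() and the
                                      * receiver's nr_unix_fds update
                                      * are wrongly skipped
                                      */
                                   unlock unix_gc_lock
  __skb_queue_purge(&hitlist)
  gc_in_progress = false

A per-fpl dead flag confines the skip to skbs that are actually being
purged.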

Fixes: 1af2dface5d2 ("af_unix: Don't access successor in unix_del_edges() during GC.")
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
 include/net/scm.h  |  1 +
 net/core/scm.c     |  1 +
 net/unix/garbage.c | 14 ++++++++++----
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/net/scm.h b/include/net/scm.h
index bbc5527809d1..0d35c7c77a74 100644
--- a/include/net/scm.h
+++ b/include/net/scm.h
@@ -33,6 +33,7 @@ struct scm_fp_list {
 	short			max;
 #ifdef CONFIG_UNIX
 	bool			inflight;
+	bool			dead;
 	struct list_head	vertices;
 	struct unix_edge	*edges;
 #endif
diff --git a/net/core/scm.c b/net/core/scm.c
index 5763f3320358..4f6a14babe5a 100644
--- a/net/core/scm.c
+++ b/net/core/scm.c
@@ -91,6 +91,7 @@ static int scm_fp_copy(struct cmsghdr *cmsg, struct scm_fp_list **fplp)
 		fpl->user = NULL;
 #if IS_ENABLED(CONFIG_UNIX)
 		fpl->inflight = false;
+		fpl->dead = false;
 		fpl->edges = NULL;
 		INIT_LIST_HEAD(&fpl->vertices);
 #endif
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index d76450133e4f..1f8b8cdfcdc8 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -158,13 +158,11 @@ static void unix_add_edge(struct scm_fp_list *fpl, struct unix_edge *edge)
 	unix_update_graph(unix_edge_successor(edge));
 }
 
-static bool gc_in_progress;
-
 static void unix_del_edge(struct scm_fp_list *fpl, struct unix_edge *edge)
 {
 	struct unix_vertex *vertex = edge->predecessor->vertex;
 
-	if (!gc_in_progress)
+	if (!fpl->dead)
 		unix_update_graph(unix_edge_successor(edge));
 
 	list_del(&edge->vertex_entry);
@@ -240,7 +238,7 @@ void unix_del_edges(struct scm_fp_list *fpl)
 		unix_del_edge(fpl, edge);
 	} while (i < fpl->count_unix);
 
-	if (!gc_in_progress) {
+	if (!fpl->dead) {
 		receiver = fpl->edges[0].successor;
 		receiver->scm_stat.nr_unix_fds -= fpl->count_unix;
 	}
@@ -559,9 +557,12 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 }
 
+static bool gc_in_progress;
+
 static void __unix_gc(struct work_struct *work)
 {
 	struct sk_buff_head hitlist;
+	struct sk_buff *skb;
 
 	spin_lock(&unix_gc_lock);
 
@@ -579,6 +580,11 @@ static void __unix_gc(struct work_struct *work)
 
 	spin_unlock(&unix_gc_lock);
 
+	skb_queue_walk(&hitlist, skb) {
+		if (UNIXCB(skb).fp)
+			UNIXCB(skb).fp->dead = true;
+	}
+
 	__skb_queue_purge(&hitlist);
 skip_gc:
 	WRITE_ONCE(gc_in_progress, false);
-- 
2.30.2


* [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph.
  2024-05-03 22:31 [PATCH v1 net-next 0/6] af_unix: GC cleanup and optimisation Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 1/6] af_unix: Add dead flag to struct scm_fp_list Kuniyuki Iwashima
@ 2024-05-03 22:31 ` Kuniyuki Iwashima
  2024-05-07 13:54   ` Paolo Abeni
  2024-05-03 22:31 ` [PATCH v1 net-next 3/6] af_unix: Manage inflight graph state as unix_graph_state Kuniyuki Iwashima
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-03 22:31 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kuniyuki Iwashima, Kuniyuki Iwashima, netdev

unix_walk_scc_fast() calls unix_scc_cyclic() for every SCC so that we
can set unix_graph_maybe_cyclic to false once all SCCs are cleaned up.

If we count the number of loops in the graph during Tarjan's algorithm,
we need not call unix_scc_cyclic() in unix_walk_scc_fast().

Instead, we can just decrement the number when calling unix_collect_skb()
and update unix_graph_maybe_cyclic based on the count.
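
As a sanity check of the invariant, a tiny user-space model of the
counting scheme (illustrative only; these names are made up and this
is not kernel code):

  #include <stdbool.h>
  #include <stdio.h>

  static unsigned long graph_circles;
  static bool graph_maybe_cyclic;

  /* cf. unix_walk_scc(): the full Tarjan walk counts every remaining
   * cyclic SCC exactly once.
   */
  static void full_walk(unsigned long cyclic_sccs_found)
  {
          graph_circles = cyclic_sccs_found;
          graph_maybe_cyclic = !!graph_circles;
  }

  /* cf. unix_walk_scc_fast(): a dead SCC collected here was one of
   * the counted loops, so decrement instead of re-testing every SCC.
   */
  static void fast_collect_dead_scc(void)
  {
          graph_circles--;
          if (!graph_circles)
                  graph_maybe_cyclic = false;
  }

  int main(void)
  {
          full_walk(2);              /* Tarjan found two loops */
          fast_collect_dead_scc();   /* one loop became unreachable */
          printf("%d\n", graph_maybe_cyclic);  /* 1 */
          fast_collect_dead_scc();   /* the last loop is collected */
          printf("%d\n", graph_maybe_cyclic);  /* 0 */
          return 0;
  }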

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
 net/unix/garbage.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 1f8b8cdfcdc8..7ffb80dd422c 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -405,6 +405,7 @@ static bool unix_scc_cyclic(struct list_head *scc)
 
 static LIST_HEAD(unix_visited_vertices);
 static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
+static unsigned long unix_graph_circles;
 
 static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index,
 			    struct sk_buff_head *hitlist)
@@ -494,8 +495,8 @@ static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_inde
 
 		if (scc_dead)
 			unix_collect_skb(&scc, hitlist);
-		else if (!unix_graph_maybe_cyclic)
-			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
+		else if (unix_scc_cyclic(&scc))
+			unix_graph_circles++;
 
 		list_del(&scc);
 	}
@@ -509,7 +510,7 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
 {
 	unsigned long last_index = UNIX_VERTEX_INDEX_START;
 
-	unix_graph_maybe_cyclic = false;
+	unix_graph_circles = 0;
 
 	/* Visit every vertex exactly once.
 	 * __unix_walk_scc() moves visited vertices to unix_visited_vertices.
@@ -524,13 +525,12 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
 
+	unix_graph_maybe_cyclic = !!unix_graph_circles;
 	unix_graph_grouped = true;
 }
 
 static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 {
-	unix_graph_maybe_cyclic = false;
-
 	while (!list_empty(&unix_unvisited_vertices)) {
 		struct unix_vertex *vertex;
 		struct list_head scc;
@@ -546,15 +546,18 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 				scc_dead = unix_vertex_dead(vertex);
 		}
 
-		if (scc_dead)
+		if (scc_dead) {
 			unix_collect_skb(&scc, hitlist);
-		else if (!unix_graph_maybe_cyclic)
-			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
+			unix_graph_circles--;
+		}
 
 		list_del(&scc);
 	}
 
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
+
+	if (!unix_graph_circles)
+		unix_graph_maybe_cyclic = false;
 }
 
 static bool gc_in_progress;
-- 
2.30.2


* [PATCH v1 net-next 3/6] af_unix: Manage inflight graph state as unix_graph_state.
  2024-05-03 22:31 [PATCH v1 net-next 0/6] af_unix: GC cleanup and optimisation Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 1/6] af_unix: Add dead flag to struct scm_fp_list Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph Kuniyuki Iwashima
@ 2024-05-03 22:31 ` Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 4/6] af_unix: Move wait_for_unix_gc() to unix_prepare_fpl() Kuniyuki Iwashima
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-03 22:31 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kuniyuki Iwashima, Kuniyuki Iwashima, netdev

The graph state is managed by two variables, unix_graph_maybe_cyclic
and unix_graph_grouped.

However, unix_graph_grouped is checked only when unix_graph_maybe_cyclic
is true, so the graph state is actually tri-state.

Let's merge the two variables into unix_graph_state.
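
For reference, my reading of the mapping from the old flag pair to
the new tri-state:

  unix_graph_maybe_cyclic  unix_graph_grouped  unix_graph_state
  -----------------------  ------------------  -----------------------
  false                    (don't care)        UNIX_GRAPH_NOT_CYCLIC
  true                     false               UNIX_GRAPH_MAYBE_CYCLIC
  true                     true                UNIX_GRAPH_CYCLIC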

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
 net/unix/garbage.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 7ffb80dd422c..478b2eb479a2 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -112,8 +112,13 @@ static struct unix_vertex *unix_edge_successor(struct unix_edge *edge)
 	return edge->successor->vertex;
 }
 
-static bool unix_graph_maybe_cyclic;
-static bool unix_graph_grouped;
+enum {
+	UNIX_GRAPH_NOT_CYCLIC,
+	UNIX_GRAPH_MAYBE_CYCLIC,
+	UNIX_GRAPH_CYCLIC,
+};
+
+static unsigned char unix_graph_state;
 
 static void unix_update_graph(struct unix_vertex *vertex)
 {
@@ -123,8 +128,7 @@ static void unix_update_graph(struct unix_vertex *vertex)
 	if (!vertex)
 		return;
 
-	unix_graph_maybe_cyclic = true;
-	unix_graph_grouped = false;
+	unix_graph_state = UNIX_GRAPH_MAYBE_CYCLIC;
 }
 
 static LIST_HEAD(unix_unvisited_vertices);
@@ -525,8 +529,7 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
 
-	unix_graph_maybe_cyclic = !!unix_graph_circles;
-	unix_graph_grouped = true;
+	unix_graph_state = unix_graph_circles ? UNIX_GRAPH_CYCLIC : UNIX_GRAPH_NOT_CYCLIC;
 }
 
 static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
@@ -557,7 +560,7 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 
 	if (!unix_graph_circles)
-		unix_graph_maybe_cyclic = false;
+		unix_graph_state = UNIX_GRAPH_NOT_CYCLIC;
 }
 
 static bool gc_in_progress;
@@ -569,14 +572,14 @@ static void __unix_gc(struct work_struct *work)
 
 	spin_lock(&unix_gc_lock);
 
-	if (!unix_graph_maybe_cyclic) {
+	if (unix_graph_state == UNIX_GRAPH_NOT_CYCLIC) {
 		spin_unlock(&unix_gc_lock);
 		goto skip_gc;
 	}
 
 	__skb_queue_head_init(&hitlist);
 
-	if (unix_graph_grouped)
+	if (unix_graph_state == UNIX_GRAPH_CYCLIC)
 		unix_walk_scc_fast(&hitlist);
 	else
 		unix_walk_scc(&hitlist);
-- 
2.30.2


* [PATCH v1 net-next 4/6] af_unix: Move wait_for_unix_gc() to unix_prepare_fpl().
  2024-05-03 22:31 [PATCH v1 net-next 0/6] af_unix: GC cleanup and optimisation Kuniyuki Iwashima
                   ` (2 preceding siblings ...)
  2024-05-03 22:31 ` [PATCH v1 net-next 3/6] af_unix: Manage inflight graph state as unix_graph_state Kuniyuki Iwashima
@ 2024-05-03 22:31 ` Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 5/6] af_unix: Schedule GC based on graph state during sendmsg() Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 6/6] af_unix: Schedule GC only if loop exists during close() Kuniyuki Iwashima
  5 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-03 22:31 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kuniyuki Iwashima, Kuniyuki Iwashima, netdev

unix_(dgram|stream)_sendmsg() call wait_for_unix_gc() to trigger GC
when the number of inflight AF_UNIX sockets is insane.

This does not happen in sane use cases, and if it did, the offending
process would simply keep sending FDs.

We need not impose that duty on every normal sendmsg(); instead, we
can trigger GC in unix_prepare_fpl(), which is called only when an fd
of an AF_UNIX socket is passed.

Also, this patch renames wait_for_unix_gc() to __unix_schedule_gc()
in preparation for the following changes.
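
The effect on the call flow, roughly (intermediate callers omitted):

  Before: every unix_(dgram|stream)_sendmsg()
            -> wait_for_unix_gc()

  After:  unix_(dgram|stream)_sendmsg() whose SCM_RIGHTS message
          carries an AF_UNIX socket
            -> ... -> unix_prepare_fpl()
                        -> __unix_schedule_gc()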

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
 include/net/af_unix.h | 1 -
 net/unix/af_unix.c    | 4 ----
 net/unix/garbage.c    | 9 ++++++---
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/include/net/af_unix.h b/include/net/af_unix.h
index b6eedf7650da..ebd1b3ca8906 100644
--- a/include/net/af_unix.h
+++ b/include/net/af_unix.h
@@ -24,7 +24,6 @@ void unix_update_edges(struct unix_sock *receiver);
 int unix_prepare_fpl(struct scm_fp_list *fpl);
 void unix_destroy_fpl(struct scm_fp_list *fpl);
 void unix_gc(void);
-void wait_for_unix_gc(struct scm_fp_list *fpl);
 
 struct unix_vertex {
 	struct list_head edges;
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index dc1651541723..863058be35f3 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1925,8 +1925,6 @@ static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
 	if (err < 0)
 		return err;
 
-	wait_for_unix_gc(scm.fp);
-
 	err = -EOPNOTSUPP;
 	if (msg->msg_flags&MSG_OOB)
 		goto out;
@@ -2202,8 +2200,6 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
 	if (err < 0)
 		return err;
 
-	wait_for_unix_gc(scm.fp);
-
 	err = -EOPNOTSUPP;
 	if (msg->msg_flags & MSG_OOB) {
 #if IS_ENABLED(CONFIG_AF_UNIX_OOB)
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 478b2eb479a2..85c0500764d4 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -271,6 +271,8 @@ void unix_update_edges(struct unix_sock *receiver)
 	}
 }
 
+static void __unix_schedule_gc(struct scm_fp_list *fpl);
+
 int unix_prepare_fpl(struct scm_fp_list *fpl)
 {
 	struct unix_vertex *vertex;
@@ -292,6 +294,8 @@ int unix_prepare_fpl(struct scm_fp_list *fpl)
 	if (!fpl->edges)
 		goto err;
 
+	__unix_schedule_gc(fpl);
+
 	return 0;
 
 err:
@@ -607,7 +611,7 @@ void unix_gc(void)
 #define UNIX_INFLIGHT_TRIGGER_GC 16000
 #define UNIX_INFLIGHT_SANE_USER (SCM_MAX_FD * 8)
 
-void wait_for_unix_gc(struct scm_fp_list *fpl)
+static void __unix_schedule_gc(struct scm_fp_list *fpl)
 {
 	/* If number of inflight sockets is insane,
 	 * force a garbage collect right now.
@@ -622,8 +626,7 @@ void wait_for_unix_gc(struct scm_fp_list *fpl)
 	/* Penalise users who want to send AF_UNIX sockets
 	 * but whose sockets have not been received yet.
 	 */
-	if (!fpl || !fpl->count_unix ||
-	    READ_ONCE(fpl->user->unix_inflight) < UNIX_INFLIGHT_SANE_USER)
+	if (READ_ONCE(fpl->user->unix_inflight) < UNIX_INFLIGHT_SANE_USER)
 		return;
 
 	if (READ_ONCE(gc_in_progress))
-- 
2.30.2


* [PATCH v1 net-next 5/6] af_unix: Schedule GC based on graph state during sendmsg().
  2024-05-03 22:31 [PATCH v1 net-next 0/6] af_unix: GC cleanup and optimisation Kuniyuki Iwashima
                   ` (3 preceding siblings ...)
  2024-05-03 22:31 ` [PATCH v1 net-next 4/6] af_unix: Move wait_for_unix_gc() to unix_prepare_fpl() Kuniyuki Iwashima
@ 2024-05-03 22:31 ` Kuniyuki Iwashima
  2024-05-03 22:31 ` [PATCH v1 net-next 6/6] af_unix: Schedule GC only if loop exists during close() Kuniyuki Iwashima
  5 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-03 22:31 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kuniyuki Iwashima, Kuniyuki Iwashima, netdev

The conventional test to trigger GC was based on the number of inflight
sockets.

Now we have more reliable data indicating whether a loop exists in the
inflight graph.

When the graph state is

  1. UNIX_GRAPH_NOT_CYCLIC, do not schedule GC
  2. UNIX_GRAPH_MAYBE_CYCLIC, schedule GC if unix_tot_inflight > 16384
  3. UNIX_GRAPH_CYCLIC, schedule GC if unix_graph_circles > 1024

1024 might sound much smaller than 16384, but if the number of loops
is larger than 1024, something must be wrong.
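
Condensed, the new policy in __unix_schedule_gc() looks like this
(a sketch of the logic in the diff below, not a verbatim copy):

  graph_state = READ_ONCE(unix_graph_state);

  if (graph_state == UNIX_GRAPH_NOT_CYCLIC)
          return;                        /* no loop, no GC */

  if (graph_state == UNIX_GRAPH_CYCLIC) {
          if (READ_ONCE(unix_graph_circles) < UNIX_INFLIGHT_SANE_CIRCLES)
                  return;                /* fewer than 1 << 10 loops */
  } else {                               /* UNIX_GRAPH_MAYBE_CYCLIC */
          if (READ_ONCE(unix_tot_inflight) < UNIX_INFLIGHT_SANE_SOCKETS)
                  return;                /* fewer than 1 << 14 sockets */
  }

  /* schedule GC; wait for it only if this user is over its budget */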

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
 net/unix/garbage.c | 44 ++++++++++++++++++++++++++++----------------
 1 file changed, 28 insertions(+), 16 deletions(-)

diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 85c0500764d4..48cea3cf4a42 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -128,7 +128,7 @@ static void unix_update_graph(struct unix_vertex *vertex)
 	if (!vertex)
 		return;
 
-	unix_graph_state = UNIX_GRAPH_MAYBE_CYCLIC;
+	WRITE_ONCE(unix_graph_state, UNIX_GRAPH_MAYBE_CYCLIC);
 }
 
 static LIST_HEAD(unix_unvisited_vertices);
@@ -533,7 +533,8 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
 
-	unix_graph_state = unix_graph_circles ? UNIX_GRAPH_CYCLIC : UNIX_GRAPH_NOT_CYCLIC;
+	WRITE_ONCE(unix_graph_state,
+		   unix_graph_circles ? UNIX_GRAPH_CYCLIC : UNIX_GRAPH_NOT_CYCLIC);
 }
 
 static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
@@ -555,7 +556,7 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 
 		if (scc_dead) {
 			unix_collect_skb(&scc, hitlist);
-			unix_graph_circles--;
+			WRITE_ONCE(unix_graph_circles, unix_graph_circles - 1);
 		}
 
 		list_del(&scc);
@@ -564,7 +565,7 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 
 	if (!unix_graph_circles)
-		unix_graph_state = UNIX_GRAPH_NOT_CYCLIC;
+		WRITE_ONCE(unix_graph_state, UNIX_GRAPH_NOT_CYCLIC);
 }
 
 static bool gc_in_progress;
@@ -608,27 +609,38 @@ void unix_gc(void)
 	queue_work(system_unbound_wq, &unix_gc_work);
 }
 
-#define UNIX_INFLIGHT_TRIGGER_GC 16000
+#define UNIX_INFLIGHT_SANE_CIRCLES (1 << 10)
+#define UNIX_INFLIGHT_SANE_SOCKETS (1 << 14)
 #define UNIX_INFLIGHT_SANE_USER (SCM_MAX_FD * 8)
 
 static void __unix_schedule_gc(struct scm_fp_list *fpl)
 {
-	/* If number of inflight sockets is insane,
-	 * force a garbage collect right now.
-	 *
-	 * Paired with the WRITE_ONCE() in unix_inflight(),
-	 * unix_notinflight(), and __unix_gc().
+	unsigned char graph_state = READ_ONCE(unix_graph_state);
+	bool wait = false;
+
+	if (graph_state == UNIX_GRAPH_NOT_CYCLIC)
+		return;
+
+	/* If the number of inflight sockets or cyclic references
+	 * is insane, schedule garbage collector if not running.
 	 */
-	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
-	    !READ_ONCE(gc_in_progress))
-		unix_gc();
+	if (graph_state == UNIX_GRAPH_CYCLIC) {
+		if (READ_ONCE(unix_graph_circles) < UNIX_INFLIGHT_SANE_CIRCLES)
+			return;
+	} else {
+		if (READ_ONCE(unix_tot_inflight) < UNIX_INFLIGHT_SANE_SOCKETS)
+			return;
+	}
 
 	/* Penalise users who want to send AF_UNIX sockets
 	 * but whose sockets have not been received yet.
 	 */
-	if (READ_ONCE(fpl->user->unix_inflight) < UNIX_INFLIGHT_SANE_USER)
-		return;
+	if (READ_ONCE(fpl->user->unix_inflight) > UNIX_INFLIGHT_SANE_USER)
+		wait = true;
+
+	if (!READ_ONCE(gc_in_progress))
+		unix_gc();
 
-	if (READ_ONCE(gc_in_progress))
+	if (wait)
 		flush_work(&unix_gc_work);
 }
-- 
2.30.2


* [PATCH v1 net-next 6/6] af_unix: Schedule GC only if loop exists during close().
  2024-05-03 22:31 [PATCH v1 net-next 0/6] af_unix: GC cleanup and optimisation Kuniyuki Iwashima
                   ` (4 preceding siblings ...)
  2024-05-03 22:31 ` [PATCH v1 net-next 5/6] af_unix: Schedule GC based on graph state during sendmsg() Kuniyuki Iwashima
@ 2024-05-03 22:31 ` Kuniyuki Iwashima
  5 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-03 22:31 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kuniyuki Iwashima, Kuniyuki Iwashima, netdev

If unix_tot_inflight is not 0 when an AF_UNIX socket is close()d, GC
is always scheduled.

However, we need not do so if we know that no loop exists in the
inflight graph.
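
With the graph state check centralised in __unix_schedule_gc(), the
close() path becomes (rough sketch):

  unix_release_sock()
    -> unix_schedule_gc()
         -> __unix_schedule_gc(NULL)
              /* returns immediately if UNIX_GRAPH_NOT_CYCLIC;
               * otherwise queues the worker without waiting
               */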

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
 include/net/af_unix.h |  3 +--
 net/unix/af_unix.c    |  3 +--
 net/unix/garbage.c    | 30 +++++++++++++++---------------
 3 files changed, 17 insertions(+), 19 deletions(-)

diff --git a/include/net/af_unix.h b/include/net/af_unix.h
index ebd1b3ca8906..1270b2c08b8f 100644
--- a/include/net/af_unix.h
+++ b/include/net/af_unix.h
@@ -17,13 +17,12 @@ static inline struct unix_sock *unix_get_socket(struct file *filp)
 }
 #endif
 
-extern unsigned int unix_tot_inflight;
 void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver);
 void unix_del_edges(struct scm_fp_list *fpl);
 void unix_update_edges(struct unix_sock *receiver);
 int unix_prepare_fpl(struct scm_fp_list *fpl);
 void unix_destroy_fpl(struct scm_fp_list *fpl);
-void unix_gc(void);
+void unix_schedule_gc(void);
 
 struct unix_vertex {
 	struct list_head edges;
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 863058be35f3..b99f7170835e 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -677,8 +677,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
 	 *	  What the above comment does talk about? --ANK(980817)
 	 */
 
-	if (READ_ONCE(unix_tot_inflight))
-		unix_gc();		/* Garbage collect fds */
+	unix_schedule_gc();
 }
 
 static void init_peercred(struct sock *sk)
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 48cea3cf4a42..2cecbb97882c 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -189,7 +189,7 @@ static void unix_free_vertices(struct scm_fp_list *fpl)
 }
 
 static DEFINE_SPINLOCK(unix_gc_lock);
-unsigned int unix_tot_inflight;
+static unsigned int unix_tot_inflight;
 
 void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver)
 {
@@ -577,11 +577,6 @@ static void __unix_gc(struct work_struct *work)
 
 	spin_lock(&unix_gc_lock);
 
-	if (unix_graph_state == UNIX_GRAPH_NOT_CYCLIC) {
-		spin_unlock(&unix_gc_lock);
-		goto skip_gc;
-	}
-
 	__skb_queue_head_init(&hitlist);
 
 	if (unix_graph_state == UNIX_GRAPH_CYCLIC)
@@ -597,18 +592,12 @@ static void __unix_gc(struct work_struct *work)
 	}
 
 	__skb_queue_purge(&hitlist);
-skip_gc:
+
 	WRITE_ONCE(gc_in_progress, false);
 }
 
 static DECLARE_WORK(unix_gc_work, __unix_gc);
 
-void unix_gc(void)
-{
-	WRITE_ONCE(gc_in_progress, true);
-	queue_work(system_unbound_wq, &unix_gc_work);
-}
-
 #define UNIX_INFLIGHT_SANE_CIRCLES (1 << 10)
 #define UNIX_INFLIGHT_SANE_SOCKETS (1 << 14)
 #define UNIX_INFLIGHT_SANE_USER (SCM_MAX_FD * 8)
@@ -621,6 +610,9 @@ static void __unix_schedule_gc(struct scm_fp_list *fpl)
 	if (graph_state == UNIX_GRAPH_NOT_CYCLIC)
 		return;
 
+	if (!fpl)
+		goto schedule;
+
 	/* If the number of inflight sockets or cyclic references
 	 * is insane, schedule garbage collector if not running.
 	 */
@@ -638,9 +630,17 @@ static void __unix_schedule_gc(struct scm_fp_list *fpl)
 	if (READ_ONCE(fpl->user->unix_inflight) > UNIX_INFLIGHT_SANE_USER)
 		wait = true;
 
-	if (!READ_ONCE(gc_in_progress))
-		unix_gc();
+schedule:
+	if (!READ_ONCE(gc_in_progress)) {
+		WRITE_ONCE(gc_in_progress, true);
+		queue_work(system_unbound_wq, &unix_gc_work);
+	}
 
 	if (wait)
 		flush_work(&unix_gc_work);
 }
+
+void unix_schedule_gc(void)
+{
+	__unix_schedule_gc(NULL);
+}
-- 
2.30.2


* Re: [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph.
  2024-05-03 22:31 ` [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph Kuniyuki Iwashima
@ 2024-05-07 13:54   ` Paolo Abeni
  2024-05-07 16:11     ` Kuniyuki Iwashima
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Abeni @ 2024-05-07 13:54 UTC (permalink / raw)
  To: Kuniyuki Iwashima, David S. Miller, Eric Dumazet, Jakub Kicinski
  Cc: Kuniyuki Iwashima, netdev

On Fri, 2024-05-03 at 15:31 -0700, Kuniyuki Iwashima wrote:
> unix_walk_scc_fast() calls unix_scc_cyclic() for every SCC so that we
> can set unix_graph_maybe_cyclic to false once all SCCs are cleaned up.
> 
> If we count the number of loops in the graph during Tarjan's algorithm,
> we need not call unix_scc_cyclic() in unix_walk_scc_fast().
> 
> Instead, we can just decrement the number when calling unix_collect_skb()
> and update unix_graph_maybe_cyclic based on the count.
> 
> Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
> ---
>  net/unix/garbage.c | 19 +++++++++++--------
>  1 file changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/net/unix/garbage.c b/net/unix/garbage.c
> index 1f8b8cdfcdc8..7ffb80dd422c 100644
> --- a/net/unix/garbage.c
> +++ b/net/unix/garbage.c
> @@ -405,6 +405,7 @@ static bool unix_scc_cyclic(struct list_head *scc)
>  
>  static LIST_HEAD(unix_visited_vertices);
>  static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
> +static unsigned long unix_graph_circles;
>  
>  static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index,
>  			    struct sk_buff_head *hitlist)
> @@ -494,8 +495,8 @@ static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_inde
>  
>  		if (scc_dead)
>  			unix_collect_skb(&scc, hitlist);
> -		else if (!unix_graph_maybe_cyclic)
> -			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
> +		else if (unix_scc_cyclic(&scc))
> +			unix_graph_circles++;
>  
>  		list_del(&scc);
>  	}
> @@ -509,7 +510,7 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
>  {
>  	unsigned long last_index = UNIX_VERTEX_INDEX_START;
>  
> -	unix_graph_maybe_cyclic = false;
> +	unix_graph_circles = 0;
>  
>  	/* Visit every vertex exactly once.
>  	 * __unix_walk_scc() moves visited vertices to unix_visited_vertices.
> @@ -524,13 +525,12 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
>  	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
>  	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
>  
> +	unix_graph_maybe_cyclic = !!unix_graph_circles;
>  	unix_graph_grouped = true;
>  }
>  
>  static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
>  {
> -	unix_graph_maybe_cyclic = false;
> -
>  	while (!list_empty(&unix_unvisited_vertices)) {
>  		struct unix_vertex *vertex;
>  		struct list_head scc;
> @@ -546,15 +546,18 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
>  				scc_dead = unix_vertex_dead(vertex);
>  		}
>  
> -		if (scc_dead)
> +		if (scc_dead) {
>  			unix_collect_skb(&scc, hitlist);
> -		else if (!unix_graph_maybe_cyclic)
> -			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
> +			unix_graph_circles--;

Possibly WARN_ON_ONCE(unix_graph_circles < 0) ?

I find this patch a little scary - meaning I can't understand it
fully. I'm wondering if it would make any sense to postpone this patch
to the next cycle?

Thanks,

Paolo


* Re: [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph.
  2024-05-07 13:54   ` Paolo Abeni
@ 2024-05-07 16:11     ` Kuniyuki Iwashima
  2024-05-08 10:08       ` Paolo Abeni
  0 siblings, 1 reply; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-07 16:11 UTC (permalink / raw)
  To: pabeni; +Cc: davem, edumazet, kuba, kuni1840, kuniyu, netdev

From: Paolo Abeni <pabeni@redhat.com>
Date: Tue, 07 May 2024 15:54:33 +0200
> On Fri, 2024-05-03 at 15:31 -0700, Kuniyuki Iwashima wrote:
> > unix_walk_scc_fast() calls unix_scc_cyclic() for every SCC so that we
> > can set unix_graph_maybe_cyclic to false once all SCCs are cleaned up.
> > 
> > If we count the number of loops in the graph during Tarjan's algorithm,
> > we need not call unix_scc_cyclic() in unix_walk_scc_fast().
> > 
> > Instead, we can just decrement the number when calling unix_collect_skb()
> > and update unix_graph_maybe_cyclic based on the count.
> > 
> > Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
> > ---
> >  net/unix/garbage.c | 19 +++++++++++--------
> >  1 file changed, 11 insertions(+), 8 deletions(-)
> > 
> > diff --git a/net/unix/garbage.c b/net/unix/garbage.c
> > index 1f8b8cdfcdc8..7ffb80dd422c 100644
> > --- a/net/unix/garbage.c
> > +++ b/net/unix/garbage.c
> > @@ -405,6 +405,7 @@ static bool unix_scc_cyclic(struct list_head *scc)
> >  
> >  static LIST_HEAD(unix_visited_vertices);
> >  static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
> > +static unsigned long unix_graph_circles;
> >  
> >  static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index,
> >  			    struct sk_buff_head *hitlist)
> > @@ -494,8 +495,8 @@ static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_inde
> >  
> >  		if (scc_dead)
> >  			unix_collect_skb(&scc, hitlist);
> > -		else if (!unix_graph_maybe_cyclic)
> > -			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
> > +		else if (unix_scc_cyclic(&scc))
> > +			unix_graph_circles++;
> >  
> >  		list_del(&scc);
> >  	}
> > @@ -509,7 +510,7 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
> >  {
> >  	unsigned long last_index = UNIX_VERTEX_INDEX_START;
> >  
> > -	unix_graph_maybe_cyclic = false;
> > +	unix_graph_circles = 0;
> >  
> >  	/* Visit every vertex exactly once.
> >  	 * __unix_walk_scc() moves visited vertices to unix_visited_vertices.
> > @@ -524,13 +525,12 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
> >  	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
> >  	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
> >  
> > +	unix_graph_maybe_cyclic = !!unix_graph_circles;
> >  	unix_graph_grouped = true;
> >  }
> >  
> >  static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
> >  {
> > -	unix_graph_maybe_cyclic = false;
> > -
> >  	while (!list_empty(&unix_unvisited_vertices)) {
> >  		struct unix_vertex *vertex;
> >  		struct list_head scc;
> > @@ -546,15 +546,18 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
> >  				scc_dead = unix_vertex_dead(vertex);
> >  		}
> >  
> > -		if (scc_dead)
> > +		if (scc_dead) {
> >  			unix_collect_skb(&scc, hitlist);
> > -		else if (!unix_graph_maybe_cyclic)
> > -			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
> > +			unix_graph_circles--;
> 
> Possibly WARN_ON_ONCE(unix_graph_circles < 0) ?

Will add in v2.

> 
> I find this patch a little scary - meaning I can't understand it
> fully. I'm wondering if it would make any sense to postpone this patch
> to the next cycle?

It's fine by me to postpone patches 2 - 5, but it would be appreciated
if patch 1 makes it to this cycle.

Thanks!

* Re: [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph.
  2024-05-07 16:11     ` Kuniyuki Iwashima
@ 2024-05-08 10:08       ` Paolo Abeni
  2024-05-08 17:05         ` Kuniyuki Iwashima
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Abeni @ 2024-05-08 10:08 UTC (permalink / raw)
  To: Kuniyuki Iwashima; +Cc: davem, edumazet, kuba, kuni1840, netdev

On Tue, 2024-05-07 at 09:11 -0700, Kuniyuki Iwashima wrote:
> From: Paolo Abeni <pabeni@redhat.com>
> Date: Tue, 07 May 2024 15:54:33 +0200
> > On Fri, 2024-05-03 at 15:31 -0700, Kuniyuki Iwashima wrote:
> > > unix_walk_scc_fast() calls unix_scc_cyclic() for every SCC so that we
> > > can set unix_graph_maybe_cyclic to false once all SCCs are cleaned up.
> > > 
> > > If we count the number of loops in the graph during Tarjan's algorithm,
> > > we need not call unix_scc_cyclic() in unix_walk_scc_fast().
> > > 
> > > Instead, we can just decrement the number when calling unix_collect_skb()
> > > and update unix_graph_maybe_cyclic based on the count.
> > > 
> > > Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
> > > ---
> > >  net/unix/garbage.c | 19 +++++++++++--------
> > >  1 file changed, 11 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/net/unix/garbage.c b/net/unix/garbage.c
> > > index 1f8b8cdfcdc8..7ffb80dd422c 100644
> > > --- a/net/unix/garbage.c
> > > +++ b/net/unix/garbage.c
> > > @@ -405,6 +405,7 @@ static bool unix_scc_cyclic(struct list_head *scc)
> > >  
> > >  static LIST_HEAD(unix_visited_vertices);
> > >  static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
> > > +static unsigned long unix_graph_circles;
> > >  
> > >  static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index,
> > >  			    struct sk_buff_head *hitlist)
> > > @@ -494,8 +495,8 @@ static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_inde
> > >  
> > >  		if (scc_dead)
> > >  			unix_collect_skb(&scc, hitlist);
> > > -		else if (!unix_graph_maybe_cyclic)
> > > -			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
> > > +		else if (unix_scc_cyclic(&scc))
> > > +			unix_graph_circles++;
> > >  
> > >  		list_del(&scc);
> > >  	}
> > > @@ -509,7 +510,7 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
> > >  {
> > >  	unsigned long last_index = UNIX_VERTEX_INDEX_START;
> > >  
> > > -	unix_graph_maybe_cyclic = false;
> > > +	unix_graph_circles = 0;
> > >  
> > >  	/* Visit every vertex exactly once.
> > >  	 * __unix_walk_scc() moves visited vertices to unix_visited_vertices.
> > > @@ -524,13 +525,12 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
> > >  	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
> > >  	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
> > >  
> > > +	unix_graph_maybe_cyclic = !!unix_graph_circles;
> > >  	unix_graph_grouped = true;
> > >  }
> > >  
> > >  static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
> > >  {
> > > -	unix_graph_maybe_cyclic = false;
> > > -
> > >  	while (!list_empty(&unix_unvisited_vertices)) {
> > >  		struct unix_vertex *vertex;
> > >  		struct list_head scc;
> > > @@ -546,15 +546,18 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
> > >  				scc_dead = unix_vertex_dead(vertex);
> > >  		}
> > >  
> > > -		if (scc_dead)
> > > +		if (scc_dead) {
> > >  			unix_collect_skb(&scc, hitlist);
> > > -		else if (!unix_graph_maybe_cyclic)
> > > -			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
> > > +			unix_graph_circles--;
> > 
> > Possibly WARN_ON_ONCE(unix_graph_circles < 0) ?
> 
> Will add in v2.
> 
> > 
> > I find this patch a little scary - meaning I can't understand it
> > fully. I'm wondering if it would make any sense to postpone this patch
> > to the next cycle?
> 
> It's fine by me to postpone patches 2 - 5, but it would be appreciated
> if patch 1 makes it to this cycle.

Yes, patch 1 looks fine and safe to me. Feel free to re-submit that one
individually right now, with my Acked-by tag.

Thanks!

Paolo



* Re: [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph.
  2024-05-08 10:08       ` Paolo Abeni
@ 2024-05-08 17:05         ` Kuniyuki Iwashima
  0 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2024-05-08 17:05 UTC (permalink / raw)
  To: pabeni; +Cc: davem, edumazet, kuba, kuni1840, kuniyu, netdev

From: Paolo Abeni <pabeni@redhat.com>
Date: Wed, 08 May 2024 12:08:48 +0200
> On Tue, 2024-05-07 at 09:11 -0700, Kuniyuki Iwashima wrote:
> > From: Paolo Abeni <pabeni@redhat.com>
> > Date: Tue, 07 May 2024 15:54:33 +0200
> > > On Fri, 2024-05-03 at 15:31 -0700, Kuniyuki Iwashima wrote:
> > > > unix_walk_scc_fast() calls unix_scc_cyclic() for every SCC so that we
> > > > can set unix_graph_maybe_cyclic to false once all SCCs are cleaned up.
> > > > 
> > > > If we count the number of loops in the graph during Tarjan's algorithm,
> > > > we need not call unix_scc_cyclic() in unix_walk_scc_fast().
> > > > 
> > > > Instead, we can just decrement the number when calling unix_collect_skb()
> > > > and update unix_graph_maybe_cyclic based on the count.
> > > > 
> > > > Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
> > > > ---
> > > >  net/unix/garbage.c | 19 +++++++++++--------
> > > >  1 file changed, 11 insertions(+), 8 deletions(-)
> > > > 
> > > > diff --git a/net/unix/garbage.c b/net/unix/garbage.c
> > > > index 1f8b8cdfcdc8..7ffb80dd422c 100644
> > > > --- a/net/unix/garbage.c
> > > > +++ b/net/unix/garbage.c
> > > > @@ -405,6 +405,7 @@ static bool unix_scc_cyclic(struct list_head *scc)
> > > >  
> > > >  static LIST_HEAD(unix_visited_vertices);
> > > >  static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
> > > > +static unsigned long unix_graph_circles;
> > > >  
> > > >  static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index,
> > > >  			    struct sk_buff_head *hitlist)
> > > > @@ -494,8 +495,8 @@ static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_inde
> > > >  
> > > >  		if (scc_dead)
> > > >  			unix_collect_skb(&scc, hitlist);
> > > > -		else if (!unix_graph_maybe_cyclic)
> > > > -			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
> > > > +		else if (unix_scc_cyclic(&scc))
> > > > +			unix_graph_circles++;
> > > >  
> > > >  		list_del(&scc);
> > > >  	}
> > > > @@ -509,7 +510,7 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
> > > >  {
> > > >  	unsigned long last_index = UNIX_VERTEX_INDEX_START;
> > > >  
> > > > -	unix_graph_maybe_cyclic = false;
> > > > +	unix_graph_circles = 0;
> > > >  
> > > >  	/* Visit every vertex exactly once.
> > > >  	 * __unix_walk_scc() moves visited vertices to unix_visited_vertices.
> > > > @@ -524,13 +525,12 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
> > > >  	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
> > > >  	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
> > > >  
> > > > +	unix_graph_maybe_cyclic = !!unix_graph_circles;
> > > >  	unix_graph_grouped = true;
> > > >  }
> > > >  
> > > >  static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
> > > >  {
> > > > -	unix_graph_maybe_cyclic = false;
> > > > -
> > > >  	while (!list_empty(&unix_unvisited_vertices)) {
> > > >  		struct unix_vertex *vertex;
> > > >  		struct list_head scc;
> > > > @@ -546,15 +546,18 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
> > > >  				scc_dead = unix_vertex_dead(vertex);
> > > >  		}
> > > >  
> > > > -		if (scc_dead)
> > > > +		if (scc_dead) {
> > > >  			unix_collect_skb(&scc, hitlist);
> > > > -		else if (!unix_graph_maybe_cyclic)
> > > > -			unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
> > > > +			unix_graph_circles--;
> > > 
> > > Possibly WARN_ON_ONCE(unix_graph_circles < 0) ?
> > 
> > Will add in v2.
> > 
> > > 
> > > I find this patch a little scary - meaning I can't understand it
> > > fully. I'm wondering if it would make any sense to postpone this patch
> > > to the next cycle?
> > 
> > It's fine by me to postpone patches 2 - 5, but it would be appreciated
> > if patch 1 makes it to this cycle.
> 
> Yes, patch 1 looks fine and safe to me. Feel free to re-submit that one
> individually right now, with my Acked-by tag.

Thanks, will do!
