* [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath
[not found] <20160207.142526.1252110536030712971.davem@davemloft.net>
@ 2016-02-08 12:14 ` Jesper Dangaard Brouer
2016-02-08 12:14 ` [net-next PATCH V2 1/3] net: bulk free infrastructure for NAPI context, use napi_consume_skb Jesper Dangaard Brouer
` (4 more replies)
0 siblings, 5 replies; 6+ messages in thread
From: Jesper Dangaard Brouer @ 2016-02-08 12:14 UTC (permalink / raw)
To: netdev, Jeff Kirsher
Cc: Andrew Morton, tom, Alexander Duyck, alexei.starovoitov, linux-mm,
Jesper Dangaard Brouer, Christoph Lameter, David S. Miller
This patchset is the first real use-case for kmem_cache bulk _free_.
The use of bulk _alloc_ is NOT included in this patchset. The full
use-case has previously been posted here [1].
The bulk free side has the largest benefit for the network stack
use-case, because the network stack hits the kmem_cache/SLUB slowpath
when freeing SKBs, due to the large number of outstanding SKBs. This
is solved by using the new API kmem_cache_free_bulk().
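As a rough sketch of the bulk-free pattern (illustrative names only;
bulk_cache and BULK_SIZE are made up here, the real per-CPU structure
is introduced in patch 1), frees are deferred into a small array and
handed to SLUB in a single call:

	#include <linux/slab.h>

	#define BULK_SIZE 64	/* illustrative; patch 1 uses NAPI_SKB_CACHE_SIZE */

	struct bulk_cache {
		size_t count;
		void *objs[BULK_SIZE];
	};

	/* Defer freeing one object; once the array fills up, hand all
	 * objects back with one kmem_cache_free_bulk() call, amortizing
	 * the kmem_cache slowpath cost across BULK_SIZE frees.
	 */
	static void bulk_defer_free(struct kmem_cache *s,
				    struct bulk_cache *c, void *obj)
	{
		c->objs[c->count++] = obj;
		if (c->count == BULK_SIZE) {
			kmem_cache_free_bulk(s, c->count, c->objs);
			c->count = 0;
		}
	}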
Introduce a new API, napi_consume_skb(), which hides/handles bulk
freeing for the caller. Drivers simply need to use this call when
freeing SKBs in NAPI context, e.g. replacing their calls to
dev_kfree_skb() / dev_consume_skb_any().
The ixgbe driver is the first user of this new API.
[1] http://thread.gmane.org/gmane.linux.network/384302/focus=397373
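On the driver side the conversion is then a one-liner, roughly as
sketched below (hypothetical "foo" driver; tx_work_pending() and
next_completed_skb() are made-up helpers, the real conversion is in
patch 3). The NAPI budget is threaded down into the TX cleanup loop
so napi_consume_skb() can tell netpoll (budget == 0) apart from
normal NAPI polling:

	/* Hypothetical driver sketch -- not a real driver. */
	static void foo_clean_tx_ring(struct foo_tx_ring *ring, int napi_budget)
	{
		while (tx_work_pending(ring)) {
			struct sk_buff *skb = next_completed_skb(ring);

			/* was: dev_consume_skb_any(skb); */
			napi_consume_skb(skb, napi_budget);
		}
	}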
---
Jesper Dangaard Brouer (3):
net: bulk free infrastructure for NAPI context, use napi_consume_skb
net: bulk free SKBs that were delay free'ed due to IRQ context
ixgbe: bulk free SKBs during TX completion cleanup cycle
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 +-
include/linux/skbuff.h | 4 +
net/core/dev.c | 9 ++-
net/core/skbuff.c | 87 +++++++++++++++++++++++--
4 files changed, 96 insertions(+), 10 deletions(-)
--
* [net-next PATCH V2 1/3] net: bulk free infrastructure for NAPI context, use napi_consume_skb
2016-02-08 12:14 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath Jesper Dangaard Brouer
@ 2016-02-08 12:14 ` Jesper Dangaard Brouer
2016-02-08 12:15 ` [net-next PATCH V2 2/3] net: bulk free SKBs that were delay free'ed due to IRQ context Jesper Dangaard Brouer
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Jesper Dangaard Brouer @ 2016-02-08 12:14 UTC (permalink / raw)
To: netdev, Jeff Kirsher
Cc: Andrew Morton, tom, Alexander Duyck, alexei.starovoitov, linux-mm,
Jesper Dangaard Brouer, Christoph Lameter, David S. Miller
Discovered that the network stack was hitting the kmem_cache/SLUB
slowpath when freeing SKBs. Doing bulk free with kmem_cache_free_bulk()
can speed up this slowpath.
NAPI context is a bit special; let's take advantage of that for bulk
freeing SKBs.
In NAPI context we are running in softirq, which gives us certain
protection. A softirq can run on several CPUs at once, BUT the
important part is that a softirq will never preempt another softirq
running on the same CPU. This gives us the opportunity to safely
access per-cpu variables from softirq context.
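A minimal sketch of why this is safe (hypothetical example, not patch
code): per-cpu data can be touched lock-free from softirq, because any
other softirq that could reach this CPU's copy would have to run on
the same CPU, and it cannot preempt the one currently running.

	#include <linux/percpu.h>

	struct softirq_cache {
		unsigned int count;
	};
	static DEFINE_PER_CPU(struct softirq_cache, softirq_cache);

	/* Called from softirq (e.g. NAPI poll). No lock is needed: a
	 * softirq on another CPU operates on a different per-cpu copy,
	 * and a softirq on this CPU cannot preempt the running one.
	 */
	static void softirq_count_event(void)
	{
		struct softirq_cache *sc = this_cpu_ptr(&softirq_cache);

		sc->count++;
	}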
Extend napi_alloc_cache (which before contained only a
page_frag_cache) to be a struct with a small array-based stack for
holding SKBs. Introduce an SKB defer-and-flush API for accessing this.
Introduce napi_consume_skb() as a replacement for e.g.
dev_consume_skb_any() when running in NAPI context. A small trick to
detect whether we are called from netpoll is to check if the budget
is 0. In that case, we need to invoke dev_consume_skb_irq().
Joint work with Alexander Duyck.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
---
include/linux/skbuff.h | 3 ++
net/core/dev.c | 1 +
net/core/skbuff.c | 83 +++++++++++++++++++++++++++++++++++++++++++++---
3 files changed, 81 insertions(+), 6 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 11f935c1a090..3c8d348223d7 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2399,6 +2399,9 @@ static inline struct sk_buff *napi_alloc_skb(struct napi_struct *napi,
{
return __napi_alloc_skb(napi, length, GFP_ATOMIC);
}
+void napi_consume_skb(struct sk_buff *skb, int budget);
+
+void __kfree_skb_flush(void);
/**
* __dev_alloc_pages - allocate page for network Rx
diff --git a/net/core/dev.c b/net/core/dev.c
index 8cba3d852f25..44384a8c9613 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5152,6 +5152,7 @@ static void net_rx_action(struct softirq_action *h)
}
}
+ __kfree_skb_flush();
local_irq_disable();
list_splice_tail_init(&sd->poll_list, &list);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b2df375ec9c2..e26bb2b1dba4 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -347,8 +347,16 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
}
EXPORT_SYMBOL(build_skb);
+#define NAPI_SKB_CACHE_SIZE 64
+
+struct napi_alloc_cache {
+ struct page_frag_cache page;
+ size_t skb_count;
+ void *skb_cache[NAPI_SKB_CACHE_SIZE];
+};
+
static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache);
-static DEFINE_PER_CPU(struct page_frag_cache, napi_alloc_cache);
+static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);
static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
{
@@ -378,9 +386,9 @@ EXPORT_SYMBOL(netdev_alloc_frag);
static void *__napi_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
{
- struct page_frag_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+ struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
- return __alloc_page_frag(nc, fragsz, gfp_mask);
+ return __alloc_page_frag(&nc->page, fragsz, gfp_mask);
}
void *napi_alloc_frag(unsigned int fragsz)
@@ -474,7 +482,7 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
gfp_t gfp_mask)
{
- struct page_frag_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+ struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
struct sk_buff *skb;
void *data;
@@ -494,7 +502,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
if (sk_memalloc_socks())
gfp_mask |= __GFP_MEMALLOC;
- data = __alloc_page_frag(nc, len, gfp_mask);
+ data = __alloc_page_frag(&nc->page, len, gfp_mask);
if (unlikely(!data))
return NULL;
@@ -505,7 +513,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
}
/* use OR instead of assignment to avoid clearing of bits in mask */
- if (nc->pfmemalloc)
+ if (nc->page.pfmemalloc)
skb->pfmemalloc = 1;
skb->head_frag = 1;
@@ -747,6 +755,69 @@ void consume_skb(struct sk_buff *skb)
}
EXPORT_SYMBOL(consume_skb);
+void __kfree_skb_flush(void)
+{
+ struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+
+ /* flush skb_cache if containing objects */
+ if (nc->skb_count) {
+ kmem_cache_free_bulk(skbuff_head_cache, nc->skb_count,
+ nc->skb_cache);
+ nc->skb_count = 0;
+ }
+}
+
+static void __kfree_skb_defer(struct sk_buff *skb)
+{
+ struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+
+ /* drop skb->head and call any destructors for packet */
+ skb_release_all(skb);
+
+ /* record skb to CPU local list */
+ nc->skb_cache[nc->skb_count++] = skb;
+
+#ifdef CONFIG_SLUB
+ /* SLUB writes into objects when freeing */
+ prefetchw(skb);
+#endif
+
+ /* flush skb_cache if it is filled */
+ if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
+ kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_SIZE,
+ nc->skb_cache);
+ nc->skb_count = 0;
+ }
+}
+
+void napi_consume_skb(struct sk_buff *skb, int budget)
+{
+ if (unlikely(!skb))
+ return;
+
+ /* if budget is 0 assume netpoll w/ IRQs disabled */
+ if (unlikely(!budget)) {
+ dev_consume_skb_irq(skb);
+ return;
+ }
+
+ if (likely(atomic_read(&skb->users) == 1))
+ smp_rmb();
+ else if (likely(!atomic_dec_and_test(&skb->users)))
+ return;
+ /* if reaching here SKB is ready to free */
+ trace_consume_skb(skb);
+
+ /* if SKB is a clone, don't handle this case */
+ if (unlikely(skb->fclone != SKB_FCLONE_UNAVAILABLE)) {
+ __kfree_skb(skb);
+ return;
+ }
+
+ __kfree_skb_defer(skb);
+}
+EXPORT_SYMBOL(napi_consume_skb);
+
/* Make sure a field is enclosed inside headers_start/headers_end section */
#define CHECK_SKB_FIELD(field) \
BUILD_BUG_ON(offsetof(struct sk_buff, field) < \
--
* [net-next PATCH V2 2/3] net: bulk free SKBs that were delay free'ed due to IRQ context
2016-02-08 12:14 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath Jesper Dangaard Brouer
2016-02-08 12:14 ` [net-next PATCH V2 1/3] net: bulk free infrastructure for NAPI context, use napi_consume_skb Jesper Dangaard Brouer
@ 2016-02-08 12:15 ` Jesper Dangaard Brouer
2016-02-08 12:15 ` [net-next PATCH V2 3/3] ixgbe: bulk free SKBs during TX completion cleanup cycle Jesper Dangaard Brouer
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Jesper Dangaard Brouer @ 2016-02-08 12:15 UTC (permalink / raw)
To: netdev, Jeff Kirsher
Cc: Andrew Morton, tom, Alexander Duyck, alexei.starovoitov, linux-mm,
Jesper Dangaard Brouer, Christoph Lameter, David S. Miller
The network stack defers freeing SKBs in case the free happens in IRQ
context or while IRQs are disabled. This happens in
__dev_kfree_skb_irq(), which adds SKBs freed during IRQ to the softirq
completion queue (softnet_data.completion_queue).
These SKBs are naturally delayed and are cleaned up during
NET_TX_SOFTIRQ in net_tx_action(). Take advantage of this and use the
SKB defer-and-flush API, as we are already in softirq context.
For modern drivers this rarely happens, although most drivers do call
dev_kfree_skb_any(), which detects the situation and calls
__dev_kfree_skb_irq() when needed. This is because netpoll can invoke
the driver from IRQ context.
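For reference, the dispatch in __dev_kfree_skb_any() is roughly the
following (paraphrased from net/core/dev.c of this era; check the tree
for the exact code):

	/* dev_kfree_skb_any() / dev_consume_skb_any() boil down to: */
	void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_free_reason reason)
	{
		if (in_irq() || irqs_disabled())
			__dev_kfree_skb_irq(skb, reason); /* defer to completion queue */
		else
			dev_kfree_skb(skb);		  /* free immediately */
	}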
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
include/linux/skbuff.h | 1 +
net/core/dev.c | 8 +++++++-
net/core/skbuff.c | 8 ++++++--
3 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 3c8d348223d7..b06ba2e07c89 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2402,6 +2402,7 @@ static inline struct sk_buff *napi_alloc_skb(struct napi_struct *napi,
void napi_consume_skb(struct sk_buff *skb, int budget);
void __kfree_skb_flush(void);
+void __kfree_skb_defer(struct sk_buff *skb);
/**
* __dev_alloc_pages - allocate page for network Rx
diff --git a/net/core/dev.c b/net/core/dev.c
index 44384a8c9613..b185d7eaa2e4 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3829,8 +3829,14 @@ static void net_tx_action(struct softirq_action *h)
trace_consume_skb(skb);
else
trace_kfree_skb(skb, net_tx_action);
- __kfree_skb(skb);
+
+ if (skb->fclone != SKB_FCLONE_UNAVAILABLE)
+ __kfree_skb(skb);
+ else
+ __kfree_skb_defer(skb);
}
+
+ __kfree_skb_flush();
}
if (sd->output_queue) {
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index e26bb2b1dba4..d278e51789e9 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -767,7 +767,7 @@ void __kfree_skb_flush(void)
}
}
-static void __kfree_skb_defer(struct sk_buff *skb)
+static inline void _kfree_skb_defer(struct sk_buff *skb)
{
struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
@@ -789,6 +789,10 @@ static void __kfree_skb_defer(struct sk_buff *skb)
nc->skb_count = 0;
}
}
+void __kfree_skb_defer(struct sk_buff *skb)
+{
+ _kfree_skb_defer(skb);
+}
void napi_consume_skb(struct sk_buff *skb, int budget)
{
@@ -814,7 +818,7 @@ void napi_consume_skb(struct sk_buff *skb, int budget)
return;
}
- __kfree_skb_defer(skb);
+ _kfree_skb_defer(skb);
}
EXPORT_SYMBOL(napi_consume_skb);
--
* [net-next PATCH V2 3/3] ixgbe: bulk free SKBs during TX completion cleanup cycle
2016-02-08 12:14 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath Jesper Dangaard Brouer
2016-02-08 12:14 ` [net-next PATCH V2 1/3] net: bulk free infrastructure for NAPI context, use napi_consume_skb Jesper Dangaard Brouer
2016-02-08 12:15 ` [net-next PATCH V2 2/3] net: bulk free SKBs that were delay free'ed due to IRQ context Jesper Dangaard Brouer
@ 2016-02-08 12:15 ` Jesper Dangaard Brouer
2016-02-11 16:59 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath David Miller
2016-02-13 11:12 ` Tilman Schmidt
4 siblings, 0 replies; 6+ messages in thread
From: Jesper Dangaard Brouer @ 2016-02-08 12:15 UTC (permalink / raw)
To: netdev, Jeff Kirsher
Cc: Andrew Morton, tom, Alexander Duyck, alexei.starovoitov, linux-mm,
Jesper Dangaard Brouer, Christoph Lameter, David S. Miller
There is an opportunity to bulk free SKBs during the reclaiming of
resources after DMA transmit completes in ixgbe_clean_tx_irq(). Thus,
bulk freeing at this point does not introduce any added latency.
Simply use napi_consume_skb(), which was recently introduced. The
napi_budget parameter is needed by napi_consume_skb() to detect
whether it is called from netpoll.
Benchmarking IPv4 forwarding, on CPU i7-4790K @ 4.2GHz (no turbo boost).
Single CPU/flow numbers: before: 1982144 pps -> after: 2064446 pps.
Improvement: +82302 pps, -20 nanosec, +4.1%
(1/1982144 pps ~= 504.5 ns vs 1/2064446 pps ~= 484.4 ns per packet,
hence the ~20 ns saving per packet).
(SLUB and GCC version 5.1.1 20150618 (Red Hat 5.1.1-4))
Joint work with Alexander Duyck.
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index c4003a88bbf6..0c701b8438b6 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1089,7 +1089,7 @@ static void ixgbe_tx_timeout_reset(struct ixgbe_adapter *adapter)
* @tx_ring: tx ring to clean
**/
static bool ixgbe_clean_tx_irq(struct ixgbe_q_vector *q_vector,
- struct ixgbe_ring *tx_ring)
+ struct ixgbe_ring *tx_ring, int napi_budget)
{
struct ixgbe_adapter *adapter = q_vector->adapter;
struct ixgbe_tx_buffer *tx_buffer;
@@ -1127,7 +1127,7 @@ static bool ixgbe_clean_tx_irq(struct ixgbe_q_vector *q_vector,
total_packets += tx_buffer->gso_segs;
/* free the skb */
- dev_consume_skb_any(tx_buffer->skb);
+ napi_consume_skb(tx_buffer->skb, napi_budget);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -2784,7 +2784,7 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
#endif
ixgbe_for_each_ring(ring, q_vector->tx)
- clean_complete &= !!ixgbe_clean_tx_irq(q_vector, ring);
+ clean_complete &= !!ixgbe_clean_tx_irq(q_vector, ring, budget);
/* Exit if we are called by netpoll or busy polling is active */
if ((budget <= 0) || !ixgbe_qv_lock_napi(q_vector))
--
* Re: [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath
2016-02-08 12:14 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath Jesper Dangaard Brouer
` (2 preceding siblings ...)
2016-02-08 12:15 ` [net-next PATCH V2 3/3] ixgbe: bulk free SKBs during TX completion cleanup cycle Jesper Dangaard Brouer
@ 2016-02-11 16:59 ` David Miller
2016-02-13 11:12 ` Tilman Schmidt
4 siblings, 0 replies; 6+ messages in thread
From: David Miller @ 2016-02-11 16:59 UTC (permalink / raw)
To: brouer
Cc: netdev, jeffrey.t.kirsher, akpm, tom, alexander.duyck,
alexei.starovoitov, linux-mm, cl
From: Jesper Dangaard Brouer <brouer@redhat.com>
Date: Mon, 08 Feb 2016 13:14:54 +0100
> This patchset is the first real use-case for kmem_cache bulk _free_.
> The use of bulk _alloc_ is NOT included in this patchset. The full
> use-case has previously been posted here [1].
>
> The bulk free side has the largest benefit for the network stack
> use-case, because the network stack hits the kmem_cache/SLUB slowpath
> when freeing SKBs, due to the large number of outstanding SKBs. This
> is solved by using the new API kmem_cache_free_bulk().
>
> Introduce a new API, napi_consume_skb(), which hides/handles bulk
> freeing for the caller. Drivers simply need to use this call when
> freeing SKBs in NAPI context, e.g. replacing their calls to
> dev_kfree_skb() / dev_consume_skb_any().
>
> The ixgbe driver is the first user of this new API.
>
> [1] http://thread.gmane.org/gmane.linux.network/384302/focus=397373
Series applied, thanks.
--
* Re: [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath
2016-02-08 12:14 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath Jesper Dangaard Brouer
` (3 preceding siblings ...)
2016-02-11 16:59 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath David Miller
@ 2016-02-13 11:12 ` Tilman Schmidt
4 siblings, 0 replies; 6+ messages in thread
From: Tilman Schmidt @ 2016-02-13 11:12 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: netdev, Jeff Kirsher, Andrew Morton, tom, Alexander Duyck,
alexei.starovoitov, linux-mm, Christoph Lameter, David S. Miller
Hi Jesper,
Am 08.02.2016 um 13:14 schrieb Jesper Dangaard Brouer:
> Introduce a new API, napi_consume_skb(), which hides/handles bulk
> freeing for the caller. Drivers simply need to use this call when
> freeing SKBs in NAPI context, e.g. replacing their calls to
> dev_kfree_skb() / dev_consume_skb_any().
Would you mind adding a kerneldoc comment for the new API function?
Thanks,
Tilman
--
Tilman Schmidt E-Mail: tilman@imap.cc
Bonn, Germany
We have flowers and candles to protect us.