* [PATCH 1/3] pciehp: Prioritize data-link event over presence detect
  [not found] <1479544367-7195-1-git-send-email-ashok.raj@intel.com>

From: Ashok Raj @ 2016-11-19 8:32 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas; +Cc: Ashok Raj, Keith Busch, linux-kernel, stable

This patch places higher precedence on a Data Link Layer Status Change
event over the Presence Detect Change event when the slot status
contains both events in the same interrupt.

When both events are observed, the pciehp driver currently relies on the
Slot Status Presence Detect State (PDS) to agree with the Link Status
Data Link Layer Active status. The Presence Detect State, however, may
be set to 1 through an out-of-band presence detect mechanism even though
the link is down, which creates conflicting events.

Since the Link Status accurately reflects the reachability of the
downstream bus, the link change event should take precedence over a
presence detect event. This patch skips checking the PDC status if we
handled a link event in the same handler.

Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org

Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
---
 drivers/pci/hotplug/pciehp_hpc.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index b57fc6d..e1ebd67 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c
@@ -620,8 +620,18 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
 		pciehp_queue_interrupt_event(slot, INT_BUTTON_PRESS);
 	}
 
-	/* Check Presence Detect Changed */
-	if (events & PCI_EXP_SLTSTA_PDC) {
+	/*
+	 * Check Link Status Changed at higher precedence than Presence Detect
+	 * Changed. The PDS value may be set to "card present" from out-of-band
+	 * detection, which may be in conflict with a Link Down and cause the
+	 * wrong event to queue.
+	 */
+	if (events & PCI_EXP_SLTSTA_DLLSC) {
+		ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot),
+			  link ? "Up" : "Down");
+		pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP :
+					     INT_LINK_DOWN);
+	} else if (events & PCI_EXP_SLTSTA_PDC) {
 		present = !!(status & PCI_EXP_SLTSTA_PDS);
 		ctrl_info(ctrl, "Slot(%s): Card %spresent\n", slot_name(slot),
 			  present ? "" : "not ");
@@ -636,13 +646,6 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
 		pciehp_queue_interrupt_event(slot, INT_POWER_FAULT);
 	}
 
-	if (events & PCI_EXP_SLTSTA_DLLSC) {
-		ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot),
-			  link ? "Up" : "Down");
-		pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP :
-					     INT_LINK_DOWN);
-	}
-
 	return IRQ_HANDLED;
 }
-- 
2.7.4
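[Editorial note: the precedence logic of patch 1/3 is small enough to model in isolation. The sketch below is a minimal userspace rendition of the new ordering; the bit values match the kernel's PCI_EXP_SLTSTA_* constants, but the enum names and the helper itself are hypothetical, not the driver's actual API.]

```c
#include <assert.h>
#include <stdint.h>

/* Slot Status bits, same values as the kernel's PCI_EXP_SLTSTA_PDC
 * and PCI_EXP_SLTSTA_DLLSC definitions. */
#define SLTSTA_PDC   0x0008	/* Presence Detect Changed */
#define SLTSTA_DLLSC 0x0100	/* Data Link Layer State Changed */

enum slot_event { EV_NONE, EV_LINK_CHANGE, EV_PRESENCE_CHANGE };

/* Hypothetical helper mirroring the patched if/else-if ordering: a
 * link change always wins, because PDS may read "card present" via
 * out-of-band presence detect even while the link is down. */
static enum slot_event classify_events(uint16_t events)
{
	if (events & SLTSTA_DLLSC)
		return EV_LINK_CHANGE;
	if (events & SLTSTA_PDC)
		return EV_PRESENCE_CHANGE;
	return EV_NONE;
}
```

With both bits set in one interrupt, classify_events() reports only the link change, which is exactly what the else-if restructuring in the patch accomplishes.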
* [PATCH 2/3] pciehp: Fix led status when enabling already enabled slot.

From: Ashok Raj @ 2016-11-19 8:32 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas; +Cc: Ashok Raj, Keith Busch, linux-kernel, stable

If an error occurs enabling a slot on a hot plug event, the presence LED
is disabled. It is not an error, though, when the slot was already
enabled. This patch returns success if called to enable an already
enabled slot. This is in the same spirit as the special handling for
-EEXIST when pciehp_configure_device determines the slot devices already
exist.

Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org

Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
---
 drivers/pci/hotplug/pciehp_ctrl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
index efe69e8..ec0b4c1 100644
--- a/drivers/pci/hotplug/pciehp_ctrl.c
+++ b/drivers/pci/hotplug/pciehp_ctrl.c
@@ -410,7 +410,7 @@ int pciehp_enable_slot(struct slot *p_slot)
 	if (getstatus) {
 		ctrl_info(ctrl, "Slot(%s): Already enabled\n",
 			  slot_name(p_slot));
-		return -EINVAL;
+		return 0;
 	}
 }
-- 
2.7.4
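[Editorial note: the semantic change in patch 2/3 — enable of an already-enabled slot reports success rather than -EINVAL — can be shown with a toy model. The struct and function below are illustrative only, not the driver's API.]

```c
#include <assert.h>

/* Toy model of the return-value change.  In the real driver, a
 * non-zero return from pciehp_enable_slot() makes the caller turn
 * off the presence LED; treating "already enabled" as success (as
 * the patch does) avoids that. */
struct toy_slot {
	int powered_on;
};

static int toy_enable_slot(struct toy_slot *s)
{
	if (s->powered_on)
		return 0;	/* previously: return -EINVAL */
	s->powered_on = 1;
	return 0;
}
```

Calling toy_enable_slot() twice now succeeds both times and leaves the slot enabled, so no error path runs and the LED state is untouched.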
* [PATCH 3/3] pciehp: Fix race condition handling surprise link-down

From: Ashok Raj @ 2016-11-19 8:32 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas; +Cc: Ashok Raj, Keith Busch, linux-kernel, stable

A surprise link down may retrain very quickly, causing the same slot to
generate a link up event before handling the link down completes.

Since the link is active, the power off work queued from the first link
down will cause a second down event when the power is disabled. The second
down event should be ignored because the slot is already powering off;
however, the "link up" event sets the slot state to POWERON before the
event to handle this is enqueued, making the second down event believe
it needs to do something. This creates a constant link up and down
event cycle.

This patch fixes that by setting the slot state only when the work to
handle the power event is executing, protected by the hot plug mutex.

Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org

Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
---
 drivers/pci/hotplug/pciehp_ctrl.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
index ec0b4c1..7ae068c 100644
--- a/drivers/pci/hotplug/pciehp_ctrl.c
+++ b/drivers/pci/hotplug/pciehp_ctrl.c
@@ -182,6 +182,7 @@ static void pciehp_power_thread(struct work_struct *work)
 	switch (info->req) {
 	case DISABLE_REQ:
 		mutex_lock(&p_slot->hotplug_lock);
+		p_slot->state = POWEROFF_STATE;
 		pciehp_disable_slot(p_slot);
 		mutex_unlock(&p_slot->hotplug_lock);
 		mutex_lock(&p_slot->lock);
@@ -190,6 +191,7 @@ static void pciehp_power_thread(struct work_struct *work)
 		break;
 	case ENABLE_REQ:
 		mutex_lock(&p_slot->hotplug_lock);
+		p_slot->state = POWERON_STATE;
 		ret = pciehp_enable_slot(p_slot);
 		mutex_unlock(&p_slot->hotplug_lock);
 		if (ret)
@@ -209,8 +211,6 @@ static void pciehp_queue_power_work(struct slot *p_slot, int req)
 {
 	struct power_work_info *info;
 
-	p_slot->state = (req == ENABLE_REQ) ? POWERON_STATE : POWEROFF_STATE;
-
 	info = kmalloc(sizeof(*info), GFP_KERNEL);
 	if (!info) {
 		ctrl_err(p_slot->ctrl, "no memory to queue %s request\n",
-- 
2.7.4
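[Editorial note: the race fixed by patch 3/3 can be sketched as a single-threaded timeline. Everything below is a greatly simplified, illustrative model — not driver code: it only tracks when the slot state is assigned, and asks whether the secondary link-down (the one caused by the power-off itself) is recognized as spurious.]

```c
#include <assert.h>

enum slot_state { STATIC_STATE, POWERON_STATE, POWEROFF_STATE };

/*
 * Returns 1 if the secondary link-down is correctly recognized as
 * spurious.  `at_queue_time` selects the old behavior (state assigned
 * when the request is queued) vs. the fixed behavior (state assigned
 * when the work executes -- under p_slot->hotplug_lock in the real
 * driver).
 */
static int secondary_down_ignored(int at_queue_time)
{
	enum slot_state state = STATIC_STATE;

	/* t0: surprise link down queues DISABLE_REQ */
	if (at_queue_time)
		state = POWEROFF_STATE;

	/* t1: the link retrains before the work runs; the link-up event
	 * queues ENABLE_REQ and, in the old scheme, overwrites the
	 * state with POWERON_STATE */
	if (at_queue_time)
		state = POWERON_STATE;

	/* t2: the queued DISABLE work finally executes; with the fix,
	 * this is the only place the state is assigned */
	if (!at_queue_time)
		state = POWEROFF_STATE;

	/* t3: removing power drops the link again; this down event is
	 * spurious iff the slot reads as powering off */
	return state == POWEROFF_STATE;
}
```

In the queue-time scheme the secondary down event sees POWERON_STATE, is treated as a fresh event, and queues yet another power-off, producing the constant up/down cycle the commit message describes; assigning the state at execution time breaks the cycle.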
* Re: [PATCH 3/3] pciehp: Fix race condition handling surprise link-down

From: Bjorn Helgaas @ 2016-12-07 23:40 UTC (permalink / raw)
To: Ashok Raj; +Cc: linux-pci, Bjorn Helgaas, Keith Busch, linux-kernel, stable

On Sat, Nov 19, 2016 at 12:32:47AM -0800, Ashok Raj wrote:
> A surprise link down may retrain very quickly, causing the same slot to
> generate a link up event before handling the link down completes.
>
> Since the link is active, the power off work queued from the first link
> down will cause a second down event when the power is disabled. The second
> down event should be ignored because the slot is already powering off;
> however, the "link up" event sets the slot state to POWERON before the
> event to handle this is enqueued, making the second down event believe
> it needs to do something. This creates a constant link up and down
> event cycle.
>
> This patch fixes that by setting the slot state only when the work to
> handle the power event is executing, protected by the hot plug mutex.

Please mention the mutex specifically by name, e.g., "p_slot->hotplug_lock".

> Cc: linux-kernel@vger.kernel.org
> Cc: stable@vger.kernel.org
>
> Signed-off-by: Ashok Raj <ashok.raj@intel.com>
> Reviewed-by: Keith Busch <keith.busch@intel.com>
> ---
>  drivers/pci/hotplug/pciehp_ctrl.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
> index ec0b4c1..7ae068c 100644
> --- a/drivers/pci/hotplug/pciehp_ctrl.c
> +++ b/drivers/pci/hotplug/pciehp_ctrl.c
> @@ -182,6 +182,7 @@ static void pciehp_power_thread(struct work_struct *work)
>  	switch (info->req) {
>  	case DISABLE_REQ:
>  		mutex_lock(&p_slot->hotplug_lock);
> +		p_slot->state = POWEROFF_STATE;

It sounds right that p_slot->state should be protected.

It looks like handle_button_press_event() and
pciehp_sysfs_enable_slot() hold p_slot->lock while updating
p_slot->state.

You're setting "state = POWEROFF_STATE" while holding
p_slot->hotplug_lock (not p_slot->lock). Four lines down, we set
"state = STATIC_STATE", but this time we're holding p_slot->lock.

What is the difference between the p_slot->lock and
p_slot->hotplug_lock? Do we need both? How do we know which one to
use?

I'm very confused.

>  		pciehp_disable_slot(p_slot);
>  		mutex_unlock(&p_slot->hotplug_lock);
>  		mutex_lock(&p_slot->lock);
> @@ -190,6 +191,7 @@ static void pciehp_power_thread(struct work_struct *work)
>  		break;
>  	case ENABLE_REQ:
>  		mutex_lock(&p_slot->hotplug_lock);
> +		p_slot->state = POWERON_STATE;
>  		ret = pciehp_enable_slot(p_slot);
>  		mutex_unlock(&p_slot->hotplug_lock);
>  		if (ret)
> @@ -209,8 +211,6 @@ static void pciehp_queue_power_work(struct slot *p_slot, int req)
>  {
>  	struct power_work_info *info;
>
> -	p_slot->state = (req == ENABLE_REQ) ? POWERON_STATE : POWEROFF_STATE;
> -
>  	info = kmalloc(sizeof(*info), GFP_KERNEL);
>  	if (!info) {
>  		ctrl_err(p_slot->ctrl, "no memory to queue %s request\n",
> --
> 2.7.4
* Re: [PATCH 3/3] pciehp: Fix race condition handling surprise link-down

From: Keith Busch @ 2016-12-08 0:04 UTC (permalink / raw)
To: Bjorn Helgaas; +Cc: Ashok Raj, linux-pci, Bjorn Helgaas, linux-kernel, stable

On Wed, Dec 07, 2016 at 05:40:54PM -0600, Bjorn Helgaas wrote:
> On Sat, Nov 19, 2016 at 12:32:47AM -0800, Ashok Raj wrote:
> > --- a/drivers/pci/hotplug/pciehp_ctrl.c
> > +++ b/drivers/pci/hotplug/pciehp_ctrl.c
> > @@ -182,6 +182,7 @@ static void pciehp_power_thread(struct work_struct *work)
> >  	switch (info->req) {
> >  	case DISABLE_REQ:
> >  		mutex_lock(&p_slot->hotplug_lock);
> > +		p_slot->state = POWEROFF_STATE;
>
> It sounds right that p_slot->state should be protected.
>
> It looks like handle_button_press_event() and
> pciehp_sysfs_enable_slot() hold p_slot->lock while updating
> p_slot->state.
>
> You're setting "state = POWEROFF_STATE" while holding
> p_slot->hotplug_lock (not p_slot->lock). Four lines down, we set
> "state = STATIC_STATE", but this time we're holding p_slot->lock.
>
> What is the difference between the p_slot->lock and
> p_slot->hotplug_lock? Do we need both? How do we know which one to
> use?
>
> I'm very confused.

This is _very_ confusing. :)

The p_slot->hotplug_lock serializes a power off and a power on event. We
only want to set the state when one of those events gets to execute
rather than when the event is queued. Changing the state when the event
is queued can trigger a never-ending up-down sequence.

It currently looks safe to nest the p_slot->lock under the
p_slot->hotplug_lock if that is your recommendation.

Alternatively we could fix this if we used an ordered work queue for
the slot's work, but that is a significantly more complex change.
* Re: [PATCH 3/3] pciehp: Fix race condition handling surprise link-down

From: Bjorn Helgaas @ 2016-12-08 15:11 UTC (permalink / raw)
To: Keith Busch; +Cc: Ashok Raj, linux-pci, Bjorn Helgaas, linux-kernel, stable

On Wed, Dec 07, 2016 at 07:04:33PM -0500, Keith Busch wrote:
> On Wed, Dec 07, 2016 at 05:40:54PM -0600, Bjorn Helgaas wrote:
> > On Sat, Nov 19, 2016 at 12:32:47AM -0800, Ashok Raj wrote:
> > > --- a/drivers/pci/hotplug/pciehp_ctrl.c
> > > +++ b/drivers/pci/hotplug/pciehp_ctrl.c
> > > @@ -182,6 +182,7 @@ static void pciehp_power_thread(struct work_struct *work)
> > >  	switch (info->req) {
> > >  	case DISABLE_REQ:
> > >  		mutex_lock(&p_slot->hotplug_lock);
> > > +		p_slot->state = POWEROFF_STATE;
> >
> > It sounds right that p_slot->state should be protected.
> >
> > It looks like handle_button_press_event() and
> > pciehp_sysfs_enable_slot() hold p_slot->lock while updating
> > p_slot->state.
> >
> > You're setting "state = POWEROFF_STATE" while holding
> > p_slot->hotplug_lock (not p_slot->lock). Four lines down, we set
> > "state = STATIC_STATE", but this time we're holding p_slot->lock.
> >
> > What is the difference between the p_slot->lock and
> > p_slot->hotplug_lock? Do we need both? How do we know which one to
> > use?
> >
> > I'm very confused.
>
> This is _very_ confusing. :)
>
> The p_slot->hotplug_lock serializes a power off and a power on event. We
> only want to set the state when one of those events gets to execute
> rather than when the event is queued. Changing the state when the event
> is queued can trigger a never-ending up-down sequence.
>
> It currently looks safe to nest the p_slot->lock under the
> p_slot->hotplug_lock if that is your recommendation.

I'm not making a recommendation; that would take a lot more thought
than I've given this.

There are at least three locks in pciehp:

  struct controller.ctrl_lock
  struct slot.lock
  struct slot.hotplug_lock

There shouldn't really be any performance paths in pciehp, so I'm
pretty doubtful that we need such complicated locking.

> Alternatively we could fix this if we used an ordered work queue for
> the slot's work, but that is a significantly more complex change.

You mean we can currently execute things from the work queue in a
different order than they were enqueued? That sounds ... difficult to
analyze, to say the least.

I don't know much about work queues, and Documentation/workqueue.txt
doesn't mention alloc_ordered_workqueue(). Is that what you are
referring to?

Bjorn
* Re: [PATCH 3/3] pciehp: Fix race condition handling surprise link-down

From: Keith Busch @ 2016-12-08 17:20 UTC (permalink / raw)
To: Bjorn Helgaas; +Cc: Ashok Raj, linux-pci, Bjorn Helgaas, linux-kernel, stable

On Thu, Dec 08, 2016 at 09:11:58AM -0600, Bjorn Helgaas wrote:
> On Wed, Dec 07, 2016 at 07:04:33PM -0500, Keith Busch wrote:
> >
> > It currently looks safe to nest the p_slot->lock under the
> > p_slot->hotplug_lock if that is your recommendation.
>
> I'm not making a recommendation; that would take a lot more thought
> than I've given this.
>
> There are at least three locks in pciehp:
>
>   struct controller.ctrl_lock
>   struct slot.lock
>   struct slot.hotplug_lock
>
> There shouldn't really be any performance paths in pciehp, so I'm
> pretty doubtful that we need such complicated locking.

They are protecting different things, but I agree it looks like room
for simplification exists.

> > Alternatively we could fix this if we used an ordered work queue for
> > the slot's work, but that is a significantly more complex change.
>
> You mean we can currently execute things from the work queue in a
> different order than they were enqueued? That sounds ... difficult to
> analyze, to say the least.

The events are dequeued in order, but they don't wait for the previous
to complete, so pciehp's current work queue can have multiple events
executing in parallel. That's part of why rapid pciehp slot events are
a little more difficult to follow, and I think we may even be unsafely
relying on the order the mutexes are obtained from these work events.

Partly unrelated, we could process surprise removals significantly
faster (microseconds vs. seconds) with the limited pci access series
here, giving fewer simultaneously executing events to consider:

  https://www.spinics.net/lists/linux-pci/msg55585.html

Do you have any concerns with that series?

> I don't know much about work queues, and Documentation/workqueue.txt
> doesn't mention alloc_ordered_workqueue(). Is that what you are
> referring to?

Yes, the alloc_ordered_workqueue is what I had in mind, though switching
to that is not as simple as calling the different API. I am looking into
that for longer term, but for the incremental fix, do you think we can
go forward with Raj's proposal?
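[Editorial note: Keith's point — items are dequeued in order but do not wait for the previous item to complete — can be modeled without threads. The sketch below is a hypothetical discrete-time model, not the workqueue API: with more than one worker, a short item queued second can finish before a long item queued first; a single worker, which is effectively what alloc_ordered_workqueue() provides, preserves queue order.]

```c
#include <assert.h>
#include <string.h>

/*
 * Two work items are queued in order A then B; A takes 5 ticks and B
 * takes 1.  Returns the completion order as a string.  `workers >= 2`
 * models a regular concurrent workqueue; `workers == 1` models an
 * ordered (single-worker) queue.
 */
static const char *completion_order(int workers)
{
	const int dur_a = 5, dur_b = 1;
	int end_a, end_b;

	if (workers >= 2) {
		end_a = dur_a;		/* both dispatched at t = 0 */
		end_b = dur_b;
	} else {
		end_a = dur_a;		/* B waits for A to finish */
		end_b = dur_a + dur_b;
	}
	return (end_b < end_a) ? "BA" : "AB";
}
```

The concurrent case completes B before A even though A was queued first — the out-of-order completion Bjorn found surprising — while the single-worker case serializes them, at the cost of head-of-line blocking.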
* Re: [PATCH 3/3] pciehp: Fix race condition handling surprise link-down

From: Bjorn Helgaas @ 2016-12-08 17:50 UTC (permalink / raw)
To: Keith Busch; +Cc: Ashok Raj, linux-pci, Bjorn Helgaas, linux-kernel, stable

On Thu, Dec 08, 2016 at 12:20:58PM -0500, Keith Busch wrote:
> On Thu, Dec 08, 2016 at 09:11:58AM -0600, Bjorn Helgaas wrote:
> > On Wed, Dec 07, 2016 at 07:04:33PM -0500, Keith Busch wrote:
> > >
> > > It currently looks safe to nest the p_slot->lock under the
> > > p_slot->hotplug_lock if that is your recommendation.
> >
> > I'm not making a recommendation; that would take a lot more thought
> > than I've given this.
> >
> > There are at least three locks in pciehp:
> >
> >   struct controller.ctrl_lock
> >   struct slot.lock
> >   struct slot.hotplug_lock
> >
> > There shouldn't really be any performance paths in pciehp, so I'm
> > pretty doubtful that we need such complicated locking.
>
> They are protecting different things, but I agree it looks like room
> for simplification exists.

If we can't simplify this immediately, can we add a comment about what
the different locks protect so people have a hint about which one to
use? For example, it looks like this patch might benefit from that
knowledge.

> > > Alternatively we could fix this if we used an ordered work queue for
> > > the slot's work, but that is a significantly more complex change.
> >
> > You mean we can currently execute things from the work queue in a
> > different order than they were enqueued? That sounds ... difficult to
> > analyze, to say the least.
>
> The events are dequeued in order, but they don't wait for the previous
> to complete, so pciehp's current work queue can have multiple events
> executing in parallel. That's part of why rapid pciehp slot events are
> a little more difficult to follow, and I think we may even be unsafely
> relying on the order the mutexes are obtained from these work events.

Hmm. I certainly did not expect multiple events executing in parallel.
That sounds like a pretty serious issue to me.

> Partly unrelated, we could process surprise removals significantly
> faster (microseconds vs. seconds) with the limited pci access series
> here, giving fewer simultaneously executing events to consider:
>
>   https://www.spinics.net/lists/linux-pci/msg55585.html
>
> Do you have any concerns with that series?

I'm dragging my feet because I want the removal process to become
simpler to understand, not more complicated, and we're exposing more
issues that I didn't even know about.

> > I don't know much about work queues, and Documentation/workqueue.txt
> > doesn't mention alloc_ordered_workqueue(). Is that what you are
> > referring to?
>
> Yes, the alloc_ordered_workqueue is what I had in mind, though switching
> to that is not as simple as calling the different API. I am looking into
> that for longer term, but for the incremental fix, do you think we can
> go forward with Raj's proposal?

I'd like to at least see a consistent locking strategy for protecting
p_slot->state. All the existing updates are protected by p_slot->lock,
but the one Raj is adding is protected by p_slot->hotplug_lock.

Bjorn
* Re: [PATCH 3/3] pciehp: Fix race condition handling surprise link-down

From: Raj, Ashok @ 2016-12-09 21:11 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: Keith Busch, linux-pci, Bjorn Helgaas, linux-kernel, stable, ashok.raj

Hi Bjorn,

Thanks. I have resent the last patch again with consistent lock usage
as you had requested.

I did attempt to make things a bit easier to understand in one of my
earlier experiments, but that resulted in very substantial changes.
Certainly something we should look at in the future to make the state
machine more robust.

Cheers,
Ashok

On Thu, Dec 08, 2016 at 11:50:09AM -0600, Bjorn Helgaas wrote:
> > Yes, the alloc_ordered_workqueue is what I had in mind, though switching
> > to that is not as simple as calling the different API. I am looking into
> > that for longer term, but for the incremental fix, do you think we can
> > go forward with Raj's proposal?
>
> I'd like to at least see a consistent locking strategy for protecting
> p_slot->state. All the existing updates are protected by p_slot->lock,
> but the one Raj is adding is protected by p_slot->hotplug_lock.
>
> Bjorn