* [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful
@ 2015-09-25 16:09 Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 1/7] genirq: Fix the documentation of request_percpu_irq Gregory CLEMENT
` (8 more replies)
0 siblings, 9 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2015-09-25 16:09 UTC (permalink / raw)
To: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Thomas Petazzoni
Cc: Andrew Lunn, Sebastian Hesselbarth, Gregory CLEMENT, Lior Amsalem,
Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia, Maxime Ripard,
Boris BREZILLON, Willy Tarreau, linux-arm-kernel
Hi,
As stated in the first version: "this patchset reworks the Marvell
neta driver in order to really support its per-CPU interrupts, instead
of faking them as SPI, and allow the use of any RX queue instead of
the hardcoded RX queue 0 that we have currently."
Following the review of the first version, Maxime started adding CPU
hotplug support. I continued his work a few weeks ago and here is the
result.
Since the first version, the main change is this CPU hotplug support. To
validate it, I powered the CPUs up and down while running iperf. I ran
the tests for hours: the kernel didn't crash and the network interfaces
were still usable. Of course it impacted the performance, but
continuously powering the CPUs down and up is not something we usually
do.
I also reorganized the series: the first 3 patches should go through
the irq subsystem, whereas the other 4 should go through the network
subsystem.
However, there is a runtime dependency between the two parts: patch 5
depends on patch 3 to be able to use the per-CPU irq.
Thanks,
Gregory
PS: Thanks to Willy who gave me some pointers on how to deal with the
NAPI.
Maxime Ripard (7):
genirq: Fix the documentation of request_percpu_irq
irq: Export per-cpu irq allocation and de-allocation functions
irqchip: armada-370-xp: Rework per-cpu interrupts handling
net: mvneta: Fix CPU_MAP registers initialisation
net: mvneta: Handle per-cpu interrupts
net: mvneta: Allow different queues
net: mvneta: Statically assign queues to CPUs
drivers/irqchip/irq-armada-370-xp.c | 14 +-
drivers/net/ethernet/marvell/mvneta.c | 316 +++++++++++++++++++++++-----------
kernel/irq/manage.c | 9 +-
3 files changed, 226 insertions(+), 113 deletions(-)
--
2.1.0
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v2 1/7] genirq: Fix the documentation of request_percpu_irq
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
@ 2015-09-25 16:09 ` Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 2/7] irq: Export per-cpu irq allocation and de-allocation functions Gregory CLEMENT
` (7 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2015-09-25 16:09 UTC (permalink / raw)
To: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Thomas Petazzoni
Cc: Andrew Lunn, Sebastian Hesselbarth, Gregory CLEMENT, Lior Amsalem,
Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia, Maxime Ripard,
Boris BREZILLON, Willy Tarreau, linux-arm-kernel
From: Maxime Ripard <maxime.ripard@free-electrons.com>
The documentation of request_percpu_irq is confusing and suggests that the
interrupt is not enabled at all, while it is actually enabled on the local
CPU.
Clarify that.
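To illustrate the behaviour being documented, here is a minimal usage
sketch (not part of the patch: my_pcpu_state, my_percpu_isr,
my_enable_local and the "my-dev" name are all hypothetical). The request
enables the interrupt on the calling CPU only, so every other CPU has to
run enable_percpu_irq() for itself, for instance through
smp_call_function_single():

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/percpu.h>
#include <linux/smp.h>

/* Hypothetical per-CPU cookie, similar to what mvneta uses for pp->ports */
struct my_pcpu_state {
	unsigned long counter;
};

static struct my_pcpu_state __percpu *my_state;

static irqreturn_t my_percpu_isr(int irq, void *dev_id)
{
	struct my_pcpu_state *state = dev_id;	/* this CPU's instance */

	state->counter++;
	return IRQ_HANDLED;
}

/* Runs on the target CPU and enables the per-CPU IRQ there */
static void my_enable_local(void *info)
{
	enable_percpu_irq(*(unsigned int *)info, IRQ_TYPE_NONE);
}

static int my_setup(unsigned int irq)
{
	int ret, cpu, this_cpu;

	my_state = alloc_percpu(struct my_pcpu_state);
	if (!my_state)
		return -ENOMEM;

	/* Allocates the interrupt and enables it on the calling CPU only */
	ret = request_percpu_irq(irq, my_percpu_isr, "my-dev", my_state);
	if (ret) {
		free_percpu(my_state);
		return ret;
	}

	/* All the other CPUs have to enable it for themselves */
	this_cpu = get_cpu();
	for_each_online_cpu(cpu)
		if (cpu != this_cpu)
			smp_call_function_single(cpu, my_enable_local, &irq,
						 true);
	put_cpu();

	return 0;
}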
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
kernel/irq/manage.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index ad1b064f94fe..dc8a80ecfc4a 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1790,9 +1790,10 @@ int setup_percpu_irq(unsigned int irq, struct irqaction *act)
* @devname: An ascii name for the claiming device
* @dev_id: A percpu cookie passed back to the handler function
*
- * This call allocates interrupt resources, but doesn't
- * automatically enable the interrupt. It has to be done on each
- * CPU using enable_percpu_irq().
+ * This call allocates interrupt resources and enables the
+ * interrupt on the local CPU. If the interrupt is supposed to be
+ * enabled on other CPUs, it has to be done on each CPU using
+ * enable_percpu_irq().
*
* Dev_id must be globally unique. It is a per-cpu variable, and
* the handler gets called with the interrupted CPU's instance of
--
2.1.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 2/7] irq: Export per-cpu irq allocation and de-allocation functions
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 1/7] genirq: Fix the documentation of request_percpu_irq Gregory CLEMENT
@ 2015-09-25 16:09 ` Gregory CLEMENT
2015-09-30 14:56 ` Thomas Petazzoni
2015-09-25 16:09 ` [PATCH v2 3/7] irqchip: armada-370-xp: Rework per-cpu interrupts handling Gregory CLEMENT
` (6 subsequent siblings)
8 siblings, 1 reply; 14+ messages in thread
From: Gregory CLEMENT @ 2015-09-25 16:09 UTC (permalink / raw)
To: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Thomas Petazzoni
Cc: Lior Amsalem, Andrew Lunn, Tawfik Bayouk, Boris BREZILLON,
Nadav Haklai, Ezequiel Garcia, Gregory CLEMENT, Maxime Ripard,
Willy Tarreau, linux-arm-kernel, Sebastian Hesselbarth
From: Maxime Ripard <maxime.ripard@free-electrons.com>
Some drivers might use the per-cpu interrupts and still might be built as a
module. Export request_percpu_irq an free_percpu_irq to these user, which
also make it consistent with enable/disable_percpu_irq that were exported.
Reported-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
kernel/irq/manage.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index dc8a80ecfc4a..89440d2f6c07 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1761,6 +1761,7 @@ void free_percpu_irq(unsigned int irq, void __percpu *dev_id)
kfree(__free_percpu_irq(irq, dev_id));
chip_bus_sync_unlock(desc);
}
+EXPORT_SYMBOL_GPL(free_percpu_irq);
/**
* setup_percpu_irq - setup a per-cpu interrupt
@@ -1832,6 +1833,7 @@ int request_percpu_irq(unsigned int irq, irq_handler_t handler,
return retval;
}
+EXPORT_SYMBOL_GPL(request_percpu_irq);
/**
* irq_get_irqchip_state - returns the irqchip state of a interrupt.
--
2.1.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 3/7] irqchip: armada-370-xp: Rework per-cpu interrupts handling
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 1/7] genirq: Fix the documentation of request_percpu_irq Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 2/7] irq: Export per-cpu irq allocation and de-allocation functions Gregory CLEMENT
@ 2015-09-25 16:09 ` Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 4/7] net: mvneta: Fix CPU_MAP registers initialisation Gregory CLEMENT
` (5 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2015-09-25 16:09 UTC (permalink / raw)
To: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Thomas Petazzoni
Cc: Andrew Lunn, Sebastian Hesselbarth, Gregory CLEMENT, Lior Amsalem,
Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia, Maxime Ripard,
Boris BREZILLON, Willy Tarreau, linux-arm-kernel
From: Maxime Ripard <maxime.ripard@free-electrons.com>
The MPIC driver currently has a hardcoded list of interrupts to handle
as per-cpu. Since the timer, fabric and neta interrupts are the only
per-cpu interrupts in the system, and they all fall within the
controller's per-CPU interrupt range, we can remove the switch and just
check the hardware irq number to determine whether a given interrupt is
per-cpu or not.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
drivers/irqchip/irq-armada-370-xp.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
index 39b72da0c143..117848b46eaa 100644
--- a/drivers/irqchip/irq-armada-370-xp.c
+++ b/drivers/irqchip/irq-armada-370-xp.c
@@ -56,9 +56,6 @@
#define ARMADA_370_XP_MAX_PER_CPU_IRQS (28)
-#define ARMADA_370_XP_TIMER0_PER_CPU_IRQ (5)
-#define ARMADA_370_XP_FABRIC_IRQ (3)
-
#define IPI_DOORBELL_START (0)
#define IPI_DOORBELL_END (8)
#define IPI_DOORBELL_MASK 0xFF
@@ -81,13 +78,10 @@ static phys_addr_t msi_doorbell_addr;
static inline bool is_percpu_irq(irq_hw_number_t irq)
{
- switch (irq) {
- case ARMADA_370_XP_TIMER0_PER_CPU_IRQ:
- case ARMADA_370_XP_FABRIC_IRQ:
+ if (irq <= ARMADA_370_XP_MAX_PER_CPU_IRQS)
return true;
- default:
- return false;
- }
+
+ return false;
}
/*
@@ -551,7 +545,7 @@ static void armada_370_xp_mpic_resume(void)
if (virq == 0)
continue;
- if (irq != ARMADA_370_XP_TIMER0_PER_CPU_IRQ)
+ if (!is_percpu_irq(irq))
writel(irq, per_cpu_int_base +
ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
else
--
2.1.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 4/7] net: mvneta: Fix CPU_MAP registers initialisation
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
` (2 preceding siblings ...)
2015-09-25 16:09 ` [PATCH v2 3/7] irqchip: armada-370-xp: Rework per-cpu interrupts handling Gregory CLEMENT
@ 2015-09-25 16:09 ` Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 5/7] net: mvneta: Handle per-cpu interrupts Gregory CLEMENT
` (4 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2015-09-25 16:09 UTC (permalink / raw)
To: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Thomas Petazzoni
Cc: Andrew Lunn, Sebastian Hesselbarth, Gregory CLEMENT, Lior Amsalem,
Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia, Maxime Ripard,
Boris BREZILLON, Willy Tarreau, linux-arm-kernel, stable
From: Maxime Ripard <maxime.ripard@free-electrons.com>
The CPU_MAP register is duplicated for each CPU, each instance being at
a different address.
However, the code so far was using CONFIG_NR_CPUS to initialise the
CPU_MAP register of each CPU, while the SoCs embed at most 4 CPUs.
This is especially an issue with multi_v7_defconfig, where CONFIG_NR_CPUS
is currently set to 16, resulting in writes to registers that are not
CPU_MAP.
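For illustration, a quick check of the arithmetic involved (assuming
MVNETA_CPU_MAP(cpu) expands to 0x2500 + ((cpu) << 2), as it does in this
version of mvneta.c): with CONFIG_NR_CPUS=16 the old loop wrote to
offsets 0x2500 through 0x253c, while the hardware only provides CPU_MAP
instances for at most 4 CPUs, i.e. offsets 0x2500 through 0x250c, so the
iterations for cpu >= 4 landed on unrelated registers. Iterating with
for_each_present_cpu() keeps the writes within the CPUs that actually
exist.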
Fixes: c5aff18204da ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
drivers/net/ethernet/marvell/mvneta.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index fe2299ac4f5c..ea587d964c46 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -949,7 +949,7 @@ static void mvneta_defaults_set(struct mvneta_port *pp)
/* Set CPU queue access map - all CPUs have access to all RX
* queues and to all TX queues
*/
- for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++)
+ for_each_present_cpu(cpu)
mvreg_write(pp, MVNETA_CPU_MAP(cpu),
(MVNETA_CPU_RXQ_ACCESS_ALL_MASK |
MVNETA_CPU_TXQ_ACCESS_ALL_MASK));
--
2.1.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 5/7] net: mvneta: Handle per-cpu interrupts
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
` (3 preceding siblings ...)
2015-09-25 16:09 ` [PATCH v2 4/7] net: mvneta: Fix CPU_MAP registers initialisation Gregory CLEMENT
@ 2015-09-25 16:09 ` Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 6/7] net: mvneta: Allow different queues Gregory CLEMENT
` (3 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2015-09-25 16:09 UTC (permalink / raw)
To: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Thomas Petazzoni
Cc: Andrew Lunn, Sebastian Hesselbarth, Gregory CLEMENT, Lior Amsalem,
Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia, Maxime Ripard,
Boris BREZILLON, Willy Tarreau, linux-arm-kernel
From: Maxime Ripard <maxime.ripard@free-electrons.com>
Now that our interrupt controller allows us to use per-CPU interrupts,
actually use them in the mvneta driver.
This obviously involves reworking the driver to have a CPU-local NAPI
structure and to report incoming packets using that structure.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
drivers/net/ethernet/marvell/mvneta.c | 91 ++++++++++++++++++++++++-----------
1 file changed, 62 insertions(+), 29 deletions(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index ea587d964c46..a78ca1d6e414 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -285,7 +285,21 @@ struct mvneta_pcpu_stats {
u64 tx_bytes;
};
+struct mvneta_pcpu_port {
+ /* Pointer to the shared port */
+ struct mvneta_port *pp;
+
+ /* Pointer to the CPU-local NAPI struct */
+ struct napi_struct napi;
+
+ /* Cause of the previous interrupt */
+ u32 cause_rx_tx;
+};
+
struct mvneta_port {
+ struct mvneta_pcpu_port __percpu *ports;
+ struct mvneta_pcpu_stats __percpu *stats;
+
int pkt_size;
unsigned int frag_size;
void __iomem *base;
@@ -293,15 +307,11 @@ struct mvneta_port {
struct mvneta_tx_queue *txqs;
struct net_device *dev;
- u32 cause_rx_tx;
- struct napi_struct napi;
-
/* Core clock */
struct clk *clk;
u8 mcast_count[256];
u16 tx_ring_size;
u16 rx_ring_size;
- struct mvneta_pcpu_stats *stats;
struct mii_bus *mii_bus;
struct phy_device *phy_dev;
@@ -1461,6 +1471,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
struct mvneta_rx_queue *rxq)
{
+ struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
struct net_device *dev = pp->dev;
int rx_done;
u32 rcvd_pkts = 0;
@@ -1513,7 +1524,7 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
skb->protocol = eth_type_trans(skb, dev);
mvneta_rx_csum(pp, rx_status, skb);
- napi_gro_receive(&pp->napi, skb);
+ napi_gro_receive(&port->napi, skb);
rcvd_pkts++;
rcvd_bytes += rx_bytes;
@@ -1548,7 +1559,7 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
mvneta_rx_csum(pp, rx_status, skb);
- napi_gro_receive(&pp->napi, skb);
+ napi_gro_receive(&port->napi, skb);
}
if (rcvd_pkts) {
@@ -2059,12 +2070,11 @@ static void mvneta_set_rx_mode(struct net_device *dev)
/* Interrupt handling - the callback for request_irq() */
static irqreturn_t mvneta_isr(int irq, void *dev_id)
{
- struct mvneta_port *pp = (struct mvneta_port *)dev_id;
+ struct mvneta_pcpu_port *port = (struct mvneta_pcpu_port *)dev_id;
- /* Mask all interrupts */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
+ disable_percpu_irq(port->pp->dev->irq);
- napi_schedule(&pp->napi);
+ napi_schedule(&port->napi);
return IRQ_HANDLED;
}
@@ -2102,11 +2112,11 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
{
int rx_done = 0;
u32 cause_rx_tx;
- unsigned long flags;
struct mvneta_port *pp = netdev_priv(napi->dev);
+ struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
if (!netif_running(pp->dev)) {
- napi_complete(napi);
+ napi_complete(&port->napi);
return rx_done;
}
@@ -2133,7 +2143,7 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
/* For the case where the last mvneta_poll did not process all
* RX packets
*/
- cause_rx_tx |= pp->cause_rx_tx;
+ cause_rx_tx |= port->cause_rx_tx;
if (rxq_number > 1) {
while ((cause_rx_tx & MVNETA_RX_INTR_MASK_ALL) && (budget > 0)) {
int count;
@@ -2164,16 +2174,11 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
if (budget > 0) {
cause_rx_tx = 0;
- napi_complete(napi);
- local_irq_save(flags);
- mvreg_write(pp, MVNETA_INTR_NEW_MASK,
- MVNETA_RX_INTR_MASK(rxq_number) |
- MVNETA_TX_INTR_MASK(txq_number) |
- MVNETA_MISCINTR_INTR_MASK);
- local_irq_restore(flags);
+ napi_complete(&port->napi);
+ enable_percpu_irq(pp->dev->irq, 0);
}
- pp->cause_rx_tx = cause_rx_tx;
+ port->cause_rx_tx = cause_rx_tx;
return rx_done;
}
@@ -2422,6 +2427,8 @@ static int mvneta_setup_txqs(struct mvneta_port *pp)
static void mvneta_start_dev(struct mvneta_port *pp)
{
+ unsigned int cpu;
+
mvneta_max_rx_size_set(pp, pp->pkt_size);
mvneta_txq_max_tx_size_set(pp, pp->pkt_size);
@@ -2429,7 +2436,11 @@ static void mvneta_start_dev(struct mvneta_port *pp)
mvneta_port_enable(pp);
/* Enable polling on the port */
- napi_enable(&pp->napi);
+ for_each_present_cpu(cpu) {
+ struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+
+ napi_enable(&port->napi);
+ }
/* Unmask interrupts */
mvreg_write(pp, MVNETA_INTR_NEW_MASK,
@@ -2447,9 +2458,15 @@ static void mvneta_start_dev(struct mvneta_port *pp)
static void mvneta_stop_dev(struct mvneta_port *pp)
{
+ unsigned int cpu;
+
phy_stop(pp->phy_dev);
- napi_disable(&pp->napi);
+ for_each_present_cpu(cpu) {
+ struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+
+ napi_disable(&port->napi);
+ }
netif_carrier_off(pp->dev);
@@ -2707,8 +2724,8 @@ static int mvneta_open(struct net_device *dev)
goto err_cleanup_rxqs;
/* Connect to port interrupt line */
- ret = request_irq(pp->dev->irq, mvneta_isr, 0,
- MVNETA_DRIVER_NAME, pp);
+ ret = request_percpu_irq(pp->dev->irq, mvneta_isr,
+ MVNETA_DRIVER_NAME, pp->ports);
if (ret) {
netdev_err(pp->dev, "cannot request irq %d\n", pp->dev->irq);
goto err_cleanup_txqs;
@@ -2728,7 +2745,7 @@ static int mvneta_open(struct net_device *dev)
return 0;
err_free_irq:
- free_irq(pp->dev->irq, pp);
+ free_percpu_irq(pp->dev->irq, pp->ports);
err_cleanup_txqs:
mvneta_cleanup_txqs(pp);
err_cleanup_rxqs:
@@ -2743,7 +2760,7 @@ static int mvneta_stop(struct net_device *dev)
mvneta_stop_dev(pp);
mvneta_mdio_remove(pp);
- free_irq(dev->irq, pp);
+ free_percpu_irq(dev->irq, pp->ports);
mvneta_cleanup_rxqs(pp);
mvneta_cleanup_txqs(pp);
@@ -3030,6 +3047,7 @@ static int mvneta_probe(struct platform_device *pdev)
const char *managed;
int phy_mode;
int err;
+ int cpu;
/* Our multiqueue support is not complete, so for now, only
* allow the usage of the first RX queue
@@ -3105,11 +3123,18 @@ static int mvneta_probe(struct platform_device *pdev)
goto err_clk;
}
+ /* Alloc per-cpu port structure */
+ pp->ports = alloc_percpu(struct mvneta_pcpu_port);
+ if (!pp->ports) {
+ err = -ENOMEM;
+ goto err_clk;
+ }
+
/* Alloc per-cpu stats */
pp->stats = netdev_alloc_pcpu_stats(struct mvneta_pcpu_stats);
if (!pp->stats) {
err = -ENOMEM;
- goto err_clk;
+ goto err_free_ports;
}
dt_mac_addr = of_get_mac_address(dn);
@@ -3150,7 +3175,12 @@ static int mvneta_probe(struct platform_device *pdev)
if (dram_target_info)
mvneta_conf_mbus_windows(pp, dram_target_info);
- netif_napi_add(dev, &pp->napi, mvneta_poll, NAPI_POLL_WEIGHT);
+ for_each_present_cpu(cpu) {
+ struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+
+ netif_napi_add(dev, &port->napi, mvneta_poll, NAPI_POLL_WEIGHT);
+ port->pp = pp;
+ }
dev->features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;
dev->hw_features |= dev->features;
@@ -3179,6 +3209,8 @@ static int mvneta_probe(struct platform_device *pdev)
err_free_stats:
free_percpu(pp->stats);
+err_free_ports:
+ free_percpu(pp->ports);
err_clk:
clk_disable_unprepare(pp->clk);
err_put_phy_node:
@@ -3198,6 +3230,7 @@ static int mvneta_remove(struct platform_device *pdev)
unregister_netdev(dev);
clk_disable_unprepare(pp->clk);
+ free_percpu(pp->ports);
free_percpu(pp->stats);
irq_dispose_mapping(dev->irq);
of_node_put(pp->phy_node);
--
2.1.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 6/7] net: mvneta: Allow different queues
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
` (4 preceding siblings ...)
2015-09-25 16:09 ` [PATCH v2 5/7] net: mvneta: Handle per-cpu interrupts Gregory CLEMENT
@ 2015-09-25 16:09 ` Gregory CLEMENT
2015-09-25 16:09 ` [PATCH v2 7/7] net: mvneta: Statically assign queues to CPUs Gregory CLEMENT
` (2 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2015-09-25 16:09 UTC (permalink / raw)
To: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Thomas Petazzoni
Cc: Andrew Lunn, Sebastian Hesselbarth, Gregory CLEMENT, Lior Amsalem,
Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia, Maxime Ripard,
Boris BREZILLON, Willy Tarreau, linux-arm-kernel
From: Maxime Ripard <maxime.ripard@free-electrons.com>
The mvneta driver allows changing the default RX queue through the
rxq_def kernel parameter.
However, the current code doesn't allow any value but 0. This is
actively checked for in the driver's probe, because the driver makes a
number of assumptions and takes a number of shortcuts in order to just
use that RX queue.
Remove these limitations in order to be able to specify any available
queue.
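As a reminder of how the queue is selected (the snippet below reflects
how the parameter is already exposed by the driver, shown here only for
context), rxq_def is a plain module parameter, so after this patch a
different default RX queue can be picked at load time, e.g. with
mvneta.rxq_def=2 on the kernel command line or modprobe mvneta rxq_def=2
when the driver is built as a module:

static int rxq_def;
module_param(rxq_def, int, S_IRUGO);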
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
drivers/net/ethernet/marvell/mvneta.c | 80 +++++------------------------------
1 file changed, 11 insertions(+), 69 deletions(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index a78ca1d6e414..401d018a96b8 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -478,7 +478,7 @@ struct mvneta_rx_queue {
/* The hardware supports eight (8) rx queues, but we are only allowing
* the first one to be used. Therefore, let's just allocate one queue.
*/
-static int rxq_number = 1;
+static int rxq_number = 8;
static int txq_number = 8;
static int rxq_def;
@@ -766,14 +766,7 @@ static void mvneta_port_up(struct mvneta_port *pp)
mvreg_write(pp, MVNETA_TXQ_CMD, q_map);
/* Enable all initialized RXQs. */
- q_map = 0;
- for (queue = 0; queue < rxq_number; queue++) {
- struct mvneta_rx_queue *rxq = &pp->rxqs[queue];
- if (rxq->descs != NULL)
- q_map |= (1 << queue);
- }
-
- mvreg_write(pp, MVNETA_RXQ_CMD, q_map);
+ mvreg_write(pp, MVNETA_RXQ_CMD, BIT(rxq_def));
}
/* Stop the Ethernet port activity */
@@ -1436,17 +1429,6 @@ static u32 mvneta_skb_tx_csum(struct mvneta_port *pp, struct sk_buff *skb)
return MVNETA_TX_L4_CSUM_NOT;
}
-/* Returns rx queue pointer (find last set bit) according to causeRxTx
- * value
- */
-static struct mvneta_rx_queue *mvneta_rx_policy(struct mvneta_port *pp,
- u32 cause)
-{
- int queue = fls(cause >> 8) - 1;
-
- return (queue < 0 || queue >= rxq_number) ? NULL : &pp->rxqs[queue];
-}
-
/* Drop packets received by the RXQ and free buffers */
static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
struct mvneta_rx_queue *rxq)
@@ -2144,33 +2126,8 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
* RX packets
*/
cause_rx_tx |= port->cause_rx_tx;
- if (rxq_number > 1) {
- while ((cause_rx_tx & MVNETA_RX_INTR_MASK_ALL) && (budget > 0)) {
- int count;
- struct mvneta_rx_queue *rxq;
- /* get rx queue number from cause_rx_tx */
- rxq = mvneta_rx_policy(pp, cause_rx_tx);
- if (!rxq)
- break;
-
- /* process the packet in that rx queue */
- count = mvneta_rx(pp, budget, rxq);
- rx_done += count;
- budget -= count;
- if (budget > 0) {
- /* set off the rx bit of the
- * corresponding bit in the cause rx
- * tx register, so that next iteration
- * will find the next rx queue where
- * packets are received on
- */
- cause_rx_tx &= ~((1 << rxq->id) << 8);
- }
- }
- } else {
- rx_done = mvneta_rx(pp, budget, &pp->rxqs[rxq_def]);
- budget -= rx_done;
- }
+ rx_done = mvneta_rx(pp, budget, &pp->rxqs[rxq_def]);
+ budget -= rx_done;
if (budget > 0) {
cause_rx_tx = 0;
@@ -2382,26 +2339,19 @@ static void mvneta_cleanup_txqs(struct mvneta_port *pp)
/* Cleanup all Rx queues */
static void mvneta_cleanup_rxqs(struct mvneta_port *pp)
{
- int queue;
-
- for (queue = 0; queue < rxq_number; queue++)
- mvneta_rxq_deinit(pp, &pp->rxqs[queue]);
+ mvneta_rxq_deinit(pp, &pp->rxqs[rxq_def]);
}
/* Init all Rx queues */
static int mvneta_setup_rxqs(struct mvneta_port *pp)
{
- int queue;
-
- for (queue = 0; queue < rxq_number; queue++) {
- int err = mvneta_rxq_init(pp, &pp->rxqs[queue]);
- if (err) {
- netdev_err(pp->dev, "%s: can't create rxq=%d\n",
- __func__, queue);
- mvneta_cleanup_rxqs(pp);
- return err;
- }
+ int err = mvneta_rxq_init(pp, &pp->rxqs[rxq_def]);
+ if (err) {
+ netdev_err(pp->dev, "%s: can't create rxq=%d\n",
+ __func__, rxq_def);
+ mvneta_cleanup_rxqs(pp);
+ return err;
}
return 0;
@@ -3049,14 +2999,6 @@ static int mvneta_probe(struct platform_device *pdev)
int err;
int cpu;
- /* Our multiqueue support is not complete, so for now, only
- * allow the usage of the first RX queue
- */
- if (rxq_def != 0) {
- dev_err(&pdev->dev, "Invalid rxq_def argument: %d\n", rxq_def);
- return -EINVAL;
- }
-
dev = alloc_etherdev_mqs(sizeof(struct mvneta_port), txq_number, rxq_number);
if (!dev)
return -ENOMEM;
--
2.1.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 7/7] net: mvneta: Statically assign queues to CPUs
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
` (5 preceding siblings ...)
2015-09-25 16:09 ` [PATCH v2 6/7] net: mvneta: Allow different queues Gregory CLEMENT
@ 2015-09-25 16:09 ` Gregory CLEMENT
2015-09-29 18:51 ` [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful David Miller
2015-09-30 14:53 ` Thomas Gleixner
8 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2015-09-25 16:09 UTC (permalink / raw)
To: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Thomas Petazzoni
Cc: Andrew Lunn, Sebastian Hesselbarth, Gregory CLEMENT, Lior Amsalem,
Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia, Maxime Ripard,
Boris BREZILLON, Willy Tarreau, linux-arm-kernel
From: Maxime Ripard <maxime.ripard@free-electrons.com>
Since the switch to per-CPU interrupts, we lost the ability to set which
CPU was going to receive our RX interrupt; it was now only the CPU on
which the mvneta_open function was run.
We can now assign our queues to their respective CPUs, and make sure
only that CPU is going to handle our traffic.
This also paves the way to changing that at runtime, and later on to
supporting RSS.
[gregory.clement@free-electrons.com]: hardened the CPU hotplug support.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
drivers/net/ethernet/marvell/mvneta.c | 143 +++++++++++++++++++++++++++++++++-
1 file changed, 142 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 401d018a96b8..a16da728e549 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -32,6 +32,7 @@
#include <linux/of_address.h>
#include <linux/phy.h>
#include <linux/clk.h>
+#include <linux/cpu.h>
/* Registers */
#define MVNETA_RXQ_CONFIG_REG(q) (0x1400 + ((q) << 2))
@@ -306,6 +307,7 @@ struct mvneta_port {
struct mvneta_rx_queue *rxqs;
struct mvneta_tx_queue *txqs;
struct net_device *dev;
+ struct notifier_block cpu_notifier;
/* Core clock */
struct clk *clk;
@@ -2055,7 +2057,6 @@ static irqreturn_t mvneta_isr(int irq, void *dev_id)
struct mvneta_pcpu_port *port = (struct mvneta_pcpu_port *)dev_id;
disable_percpu_irq(port->pp->dev->irq);
-
napi_schedule(&port->napi);
return IRQ_HANDLED;
@@ -2656,6 +2657,125 @@ static void mvneta_mdio_remove(struct mvneta_port *pp)
pp->phy_dev = NULL;
}
+static void mvneta_percpu_enable(void *arg)
+{
+ struct mvneta_port *pp = arg;
+
+ enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
+}
+
+static void mvneta_percpu_disable(void *arg)
+{
+ struct mvneta_port *pp = arg;
+
+ disable_percpu_irq(pp->dev->irq);
+}
+
+static void mvneta_percpu_elect(struct mvneta_port *pp)
+{
+ int online_cpu_idx, cpu, i = 0;
+
+ online_cpu_idx = rxq_def % num_online_cpus();
+
+ for_each_online_cpu(cpu) {
+ if (i == online_cpu_idx)
+ /* Enable per-CPU interrupt on the one CPU we
+ * just elected
+ */
+ smp_call_function_single(cpu, mvneta_percpu_enable,
+ pp, true);
+ else
+ /* Disable per-CPU interrupt on all the other CPUs */
+ smp_call_function_single(cpu, mvneta_percpu_disable,
+ pp, true);
+ i++;
+ }
+};
+
+static int mvneta_percpu_notifier(struct notifier_block *nfb,
+ unsigned long action, void *hcpu)
+{
+ struct mvneta_port *pp = container_of(nfb, struct mvneta_port,
+ cpu_notifier);
+ int cpu = (unsigned long)hcpu, other_cpu;
+ struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+
+ switch (action) {
+ case CPU_ONLINE:
+ case CPU_ONLINE_FROZEN:
+ netif_tx_stop_all_queues(pp->dev);
+
+ /* We have to synchronise on the napi of each CPU
+ * except the one just being woken up
+ */
+ for_each_online_cpu(other_cpu) {
+ if (other_cpu != cpu) {
+ struct mvneta_pcpu_port *other_port =
+ per_cpu_ptr(pp->ports, other_cpu);
+
+ napi_synchronize(&other_port->napi);
+ }
+ }
+
+ /* Mask all ethernet port interrupts */
+ mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
+ mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
+ mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
+ napi_enable(&port->napi);
+
+ /* Enable per-CPU interrupt on the one CPU we care
+ * about.
+ */
+ mvneta_percpu_elect(pp);
+
+ /* Unmask all ethernet port interrupts */
+ mvreg_write(pp, MVNETA_INTR_NEW_MASK,
+ MVNETA_RX_INTR_MASK(rxq_number) |
+ MVNETA_TX_INTR_MASK(txq_number) |
+ MVNETA_MISCINTR_INTR_MASK);
+ mvreg_write(pp, MVNETA_INTR_MISC_MASK,
+ MVNETA_CAUSE_PHY_STATUS_CHANGE |
+ MVNETA_CAUSE_LINK_CHANGE |
+ MVNETA_CAUSE_PSC_SYNC_CHANGE);
+ netif_tx_start_all_queues(pp->dev);
+ break;
+ case CPU_DOWN_PREPARE:
+ case CPU_DOWN_PREPARE_FROZEN:
+ netif_tx_stop_all_queues(pp->dev);
+ /* Mask all ethernet port interrupts */
+ mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
+ mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
+ mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
+
+ napi_synchronize(&port->napi);
+ napi_disable(&port->napi);
+ /* Disable per-CPU interrupts on the CPU that is
+ * brought down.
+ */
+ smp_call_function_single(cpu, mvneta_percpu_disable,
+ pp, true);
+
+ break;
+ case CPU_DEAD:
+ case CPU_DEAD_FROZEN:
+ /* Check if a new CPU must be elected now that this one is down */
+ mvneta_percpu_elect(pp);
+ /* Unmask all ethernet port interrupts */
+ mvreg_write(pp, MVNETA_INTR_NEW_MASK,
+ MVNETA_RX_INTR_MASK(rxq_number) |
+ MVNETA_TX_INTR_MASK(txq_number) |
+ MVNETA_MISCINTR_INTR_MASK);
+ mvreg_write(pp, MVNETA_INTR_MISC_MASK,
+ MVNETA_CAUSE_PHY_STATUS_CHANGE |
+ MVNETA_CAUSE_LINK_CHANGE |
+ MVNETA_CAUSE_PSC_SYNC_CHANGE);
+ netif_tx_start_all_queues(pp->dev);
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
static int mvneta_open(struct net_device *dev)
{
struct mvneta_port *pp = netdev_priv(dev);
@@ -2681,6 +2801,22 @@ static int mvneta_open(struct net_device *dev)
goto err_cleanup_txqs;
}
+ /* Even though the documentation says that request_percpu_irq
+ * doesn't enable the interrupts automatically, it actually
+ * does so on the local CPU.
+ *
+ * Make sure it's disabled.
+ */
+ mvneta_percpu_disable(pp);
+
+ /* Elect a CPU to handle our RX queue interrupt */
+ mvneta_percpu_elect(pp);
+
+ /* Register a CPU notifier to handle the case where our CPU
+ * might be taken offline.
+ */
+ register_cpu_notifier(&pp->cpu_notifier);
+
/* In default link is down */
netif_carrier_off(pp->dev);
@@ -2707,9 +2843,13 @@ err_cleanup_rxqs:
static int mvneta_stop(struct net_device *dev)
{
struct mvneta_port *pp = netdev_priv(dev);
+ int cpu;
mvneta_stop_dev(pp);
mvneta_mdio_remove(pp);
+ unregister_cpu_notifier(&pp->cpu_notifier);
+ for_each_present_cpu(cpu)
+ smp_call_function_single(cpu, mvneta_percpu_disable, pp, true);
free_percpu_irq(dev->irq, pp->ports);
mvneta_cleanup_rxqs(pp);
mvneta_cleanup_txqs(pp);
@@ -3049,6 +3189,7 @@ static int mvneta_probe(struct platform_device *pdev)
err = of_property_read_string(dn, "managed", &managed);
pp->use_inband_status = (err == 0 &&
strcmp(managed, "in-band-status") == 0);
+ pp->cpu_notifier.notifier_call = mvneta_percpu_notifier;
pp->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(pp->clk)) {
--
2.1.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
` (6 preceding siblings ...)
2015-09-25 16:09 ` [PATCH v2 7/7] net: mvneta: Statically assign queues to CPUs Gregory CLEMENT
@ 2015-09-29 18:51 ` David Miller
2015-09-30 14:56 ` Thomas Gleixner
2015-09-30 14:53 ` Thomas Gleixner
8 siblings, 1 reply; 14+ messages in thread
From: David Miller @ 2015-09-29 18:51 UTC (permalink / raw)
To: gregory.clement
Cc: tglx, jason, linux-kernel, netdev, thomas.petazzoni, andrew,
sebastian.hesselbarth, alior, tawfik, nadavh, ezequiel.garcia,
maxime.ripard, boris.brezillon, w, linux-arm-kernel
From: Gregory CLEMENT <gregory.clement@free-electrons.com>
Date: Fri, 25 Sep 2015 18:09:31 +0200
> As stated in the first version: "this patchset reworks the Marvell
> neta driver in order to really support its per-CPU interrupts, instead
> of faking them as SPI, and allow the use of any RX queue instead of
> the hardcoded RX queue 0 that we have currently."
Series applied, thanks.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful
2015-09-25 16:09 [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful Gregory CLEMENT
` (7 preceding siblings ...)
2015-09-29 18:51 ` [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful David Miller
@ 2015-09-30 14:53 ` Thomas Gleixner
8 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2015-09-30 14:53 UTC (permalink / raw)
To: Gregory CLEMENT
Cc: Jason Cooper, linux-kernel, David S. Miller, netdev,
Thomas Petazzoni, Andrew Lunn, Sebastian Hesselbarth,
Lior Amsalem, Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia,
Maxime Ripard, Boris BREZILLON, Willy Tarreau, linux-arm-kernel
On Fri, 25 Sep 2015, Gregory CLEMENT wrote:
> I also reorganized the series: the first 3 patches should go through
> the irq subsystem, whereas the other 4 should go through the network
> subsystem.
>
> However, there is a runtime dependency between the two parts: patch 5
> depends on patch 3 to be able to use the per-CPU irq.
The first 3 are good to go from my side. To avoid merge window
dependencies, I'll create a branch irq/for-net with these 3 patches
and merge that into irq/core.
So if the network folks want to pick up the rest, they can merge in
that branch as well.
Thanks,
tglx
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful
2015-09-29 18:51 ` [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful David Miller
@ 2015-09-30 14:56 ` Thomas Gleixner
2015-09-30 15:40 ` David Miller
0 siblings, 1 reply; 14+ messages in thread
From: Thomas Gleixner @ 2015-09-30 14:56 UTC (permalink / raw)
To: David Miller
Cc: gregory.clement, jason, linux-kernel, netdev, thomas.petazzoni,
andrew, sebastian.hesselbarth, alior, tawfik, nadavh,
ezequiel.garcia, maxime.ripard, boris.brezillon, w,
linux-arm-kernel
On Tue, 29 Sep 2015, David Miller wrote:
> From: Gregory CLEMENT <gregory.clement@free-electrons.com>
> Date: Fri, 25 Sep 2015 18:09:31 +0200
>
> > As stated in the first version: "this patchset reworks the Marvell
> > neta driver in order to really support its per-CPU interrupts, instead
> > of faking them as SPI, and allow the use of any RX queue instead of
> > the hardcoded RX queue 0 that we have currently."
>
> Series applied, thanks.
You could have had the courtesy to wait for an ack for the core irq
parts at least....
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 2/7] irq: Export per-cpu irq allocation and de-allocation functions
2015-09-25 16:09 ` [PATCH v2 2/7] irq: Export per-cpu irq allocation and de-allocation functions Gregory CLEMENT
@ 2015-09-30 14:56 ` Thomas Petazzoni
0 siblings, 0 replies; 14+ messages in thread
From: Thomas Petazzoni @ 2015-09-30 14:56 UTC (permalink / raw)
To: Gregory CLEMENT
Cc: Thomas Gleixner, Jason Cooper, linux-kernel, David S. Miller,
netdev, Andrew Lunn, Sebastian Hesselbarth, Lior Amsalem,
Tawfik Bayouk, Nadav Haklai, Ezequiel Garcia, Maxime Ripard,
Boris BREZILLON, Willy Tarreau, linux-arm-kernel
Hello,
Some minor typos, quite certainly not worth respining a new iteration
of the series.
On Fri, 25 Sep 2015 18:09:33 +0200, Gregory CLEMENT wrote:
> From: Maxime Ripard <maxime.ripard@free-electrons.com>
>
> Some drivers might use the per-cpu interrupts and still might be built as a
> module. Export request_percpu_irq an free_percpu_irq to these user, which
an -> and
these user -> these users
Best regards,
Thomas
--
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful
2015-09-30 14:56 ` Thomas Gleixner
@ 2015-09-30 15:40 ` David Miller
2015-09-30 17:39 ` Thomas Gleixner
0 siblings, 1 reply; 14+ messages in thread
From: David Miller @ 2015-09-30 15:40 UTC (permalink / raw)
To: tglx
Cc: gregory.clement, jason, linux-kernel, netdev, thomas.petazzoni,
andrew, sebastian.hesselbarth, alior, tawfik, nadavh,
ezequiel.garcia, maxime.ripard, boris.brezillon, w,
linux-arm-kernel
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 30 Sep 2015 16:56:06 +0200 (CEST)
> On Tue, 29 Sep 2015, David Miller wrote:
>> From: Gregory CLEMENT <gregory.clement@free-electrons.com>
>> Date: Fri, 25 Sep 2015 18:09:31 +0200
>>
>> > As stated in the first version: "this patchset reworks the Marvell
>> > neta driver in order to really support its per-CPU interrupts, instead
>> > of faking them as SPI, and allow the use of any RX queue instead of
>> > the hardcoded RX queue 0 that we have currently."
>>
>> Series applied, thanks.
>
> You could have had the courtesy to wait for an ack for the core irq
> parts at least....
Sorry, my impression was that those parts were already discussed and
agreed upon.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 0/7] net: mvneta: Switch to per-CPU irq and make rxq_def useful
2015-09-30 15:40 ` David Miller
@ 2015-09-30 17:39 ` Thomas Gleixner
0 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2015-09-30 17:39 UTC (permalink / raw)
To: David Miller
Cc: gregory.clement, jason, linux-kernel, netdev, thomas.petazzoni,
andrew, sebastian.hesselbarth, alior, tawfik, nadavh,
ezequiel.garcia, maxime.ripard, boris.brezillon, w,
linux-arm-kernel
On Wed, 30 Sep 2015, David Miller wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> Date: Wed, 30 Sep 2015 16:56:06 +0200 (CEST)
>
> > On Tue, 29 Sep 2015, David Miller wrote:
> >> From: Gregory CLEMENT <gregory.clement@free-electrons.com>
> >> Date: Fri, 25 Sep 2015 18:09:31 +0200
> >>
> >> > As stated in the first version: "this patchset reworks the Marvell
> >> > neta driver in order to really support its per-CPU interrupts, instead
> >> > of faking them as SPI, and allow the use of any RX queue instead of
> >> > the hardcoded RX queue 0 that we have currently."
> >>
> >> Series applied, thanks.
> >
> > You could have had the courtesy to wait for an ack for the core irq
> > parts at least....
>
> Sorry, my impression was that those parts were already discussed and
> agreed upon.
No problem. I would have preferred to merge them to a separate branch
which you could have pulled so we don't end up with conflicts on
further changes in that area. But it's ok as it is. The patches are
good to go.
Thanks,
tglx
^ permalink raw reply [flat|nested] 14+ messages in thread