linux-kernel.vger.kernel.org archive mirror
* Re: [openib-general] [PATCH 07/16] ehca: interrupt handling routines
       [not found] <4450A196.2050901@de.ibm.com>
@ 2006-05-04 21:29 ` Roland Dreier
  2006-05-05 13:05   ` Heiko J Schick
  2006-05-09 13:15 ` Christoph Hellwig
  1 sibling, 1 reply; 15+ messages in thread
From: Roland Dreier @ 2006-05-04 21:29 UTC (permalink / raw)
  To: Heiko J Schick
  Cc: openib-general, Christoph Raisch, Hoang-Nam Nguyen, Marcus Eder,
	linux-kernel, linuxppc-dev

 > +void ehca_queue_comp_task(struct ehca_comp_pool *pool, struct ehca_cq *__cq)
 > +{
 > +	int cpu;
 > +	int cpu_id;
 > +	struct ehca_cpu_comp_task *cct;
 > +	unsigned long flags_cct;
 > +	unsigned long flags_cq;
 > +
 > +	cpu = get_cpu();
 > +	cpu_id = find_next_online_cpu(pool);
 > +
 > +	EDEB_EN(7, "pool=%p cq=%p cq_nr=%x CPU=%x:%x:%x:%x",
 > +		pool, __cq, __cq->cq_number,
 > +		cpu, cpu_id, num_online_cpus(), num_possible_cpus());
 > +
 > +	BUG_ON(!cpu_online(cpu_id));
 > +
 > +	cct = per_cpu_ptr(pool->cpu_comp_tasks, cpu_id);
 > +
 > +	spin_lock_irqsave(&cct->task_lock, flags_cct);
 > +	spin_lock_irqsave(&__cq->task_lock, flags_cq);
 > +
 > +	if (__cq->nr_callbacks == 0) {
 > +		__cq->nr_callbacks++;
 > +		list_add_tail(&__cq->entry, &cct->cq_list);
 > +		wake_up(&cct->wait_queue);
 > +	}
 > +	else
 > +		__cq->nr_callbacks++;
 > +
 > +	spin_unlock_irqrestore(&__cq->task_lock, flags_cq);
 > +	spin_unlock_irqrestore(&cct->task_lock, flags_cct);
 > +
 > +	put_cpu();
 > +
 > +	EDEB_EX(7, "cct=%p", cct);
 > +
 > +	return;
 > +}

I had never read the ehca completion event handling code very
carefully until now, but Shirley's work on IPoIB motivated me to take
a closer look.

It seems that you are deferring completion event dispatch into threads
spread across all the CPUs.  This seems like a very strange thing to
me -- you are adding latency and possibly causing cacheline pingpong.

It may help throughput in some cases to spread the work across
multiple CPUs but it seems strange to me to do this in the low-level
driver.  My intuition would be that it would be better to do this in
the higher levels, and leave open the possibility for protocols that
want the lowest possible latency to be called directly from the
interrupt handler.
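
By "called directly" I mean something like the following (a schematic,
hypothetical driver sketch, not actual ehca code):

	static irqreturn_t hca_eq_interrupt(int irq, void *eq)
	{
		struct ib_cq *cq = eq_to_cq(eq);	/* hypothetical lookup */

		/* the consumer's comp_handler runs right here in IRQ
		   context, with no thread hop in between */
		cq->comp_handler(cq, cq->cq_context);

		return IRQ_HANDLED;
	}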

What was the thinking that led to this design?

 - R.


* Re: [openib-general] [PATCH 07/16] ehca: interrupt handling routines
  2006-05-04 21:29 ` [openib-general] [PATCH 07/16] ehca: interrupt handling routines Roland Dreier
@ 2006-05-05 13:05   ` Heiko J Schick
  2006-05-05 14:49     ` Roland Dreier
  0 siblings, 1 reply; 15+ messages in thread
From: Heiko J Schick @ 2006-05-05 13:05 UTC (permalink / raw)
  To: Roland Dreier
  Cc: openib-general, Christoph Raisch, Hoang-Nam Nguyen, Marcus Eder,
	linux-kernel, linuxppc-dev

Hello Roland,

Roland Dreier wrote:
> It seems that you are deferring completion event dispatch into threads
> spread across all the CPUs.  This seems like a very strange thing to
> me -- you are adding latency and possibly causing cacheline pingpong.
> 
> It may help throughput in some cases to spread the work across
> multiple CPUs but it seems strange to me to do this in the low-level
> driver.  My intuition would be that it would be better to do this in
> the higher levels, and leave open the possibility for protocols that
> want the lowest possible latency to be called directly from the
> interrupt handler.

We've implemented this "spread CQ callbacks across multiple CPUs"
functionality to get better throughput on an SMP system, as you have
seen.

Originally, we had the same idea you mention: that it would be better
to do this in the higher levels. The point is that so far we can't see
any simple possibility of how this can be done in the OpenIB stack, the
TCP/IP network layer, or somewhere else in the Linux kernel.

For example:
For IPoIB we get the best throughput when we do the CQ callbacks on
different CPUs rather than staying on the same CPU.

In other papers and slides (see [1]) you can see similar approaches.

I think such an implementation or functionality would require
non-trivial changes. This could also be related to other I/O traffic.

[1]:  Speeding up Networking, Van Jacobson and Bob Felderman,
       http://www.lemis.com/grog/Documentation/vj/lca06vj.pdf

Regards,
	Heiko



* Re: [openib-general] [PATCH 07/16] ehca: interrupt handling routines
  2006-05-05 13:05   ` Heiko J Schick
@ 2006-05-05 14:49     ` Roland Dreier
  2006-05-09 12:35       ` Heiko J Schick
  0 siblings, 1 reply; 15+ messages in thread
From: Roland Dreier @ 2006-05-05 14:49 UTC (permalink / raw)
  To: Heiko J Schick
  Cc: linux-kernel, openib-general, linuxppc-dev, Christoph Raisch,
	Hoang-Nam Nguyen, Marcus Eder

    Heiko> Originally, we had the same idea you mention: that it
    Heiko> would be better to do this in the higher levels. The point
    Heiko> is that so far we can't see any simple possibility of how
    Heiko> this can be done in the OpenIB stack, the TCP/IP network
    Heiko> layer, or somewhere else in the Linux kernel.

    Heiko> For example: For IPoIB we get the best throughput when we
    Heiko> do the CQ callbacks on different CPUs rather than staying
    Heiko> on the same CPU.

So why not do it in IPoIB then?  This approach is not optimal
globally.  For example, uverbs event dispatch is just going to queue
an event and wake up the process waiting for events, and doing this on
some random CPU not related to where the process will run is
clearly the worst possible way to dispatch the event.
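
Schematically, that dispatch is just (simplified from the uverbs event
path):

	spin_lock_irqsave(&file->lock, flags);
	list_add_tail(&entry->list, &file->event_list);
	spin_unlock_irqrestore(&file->lock, flags);
	wake_up_interruptible(&file->poll_wait);

and the woken process then runs on whatever CPU the scheduler picks for
it, so the CPU the dispatch thread happened to run on buys you nothing.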

    Heiko> In other papers and slides (see [1]) you can see similar
    Heiko> approaches.

    Heiko> [1]: Speeding up Networking, Van Jacobson and Bob
    Heiko> Felderman,
    Heiko> http://www.lemis.com/grog/Documentation/vj/lca06vj.pdf

I think you've misunderstood this paper.  It's about maximizing CPU
locality and pushing processing directly into the consumer.  In the
context of slide 9, what you've done is sort of like adding another
control loop inside the kernel, since you dispatch from interrupt
handler to driver thread to final consumer.  So I would argue that
your approach is exactly the opposite of what VJ is advocating.

 - R.


* Re: [openib-general] [PATCH 07/16] ehca: interrupt handling routines
  2006-05-05 14:49     ` Roland Dreier
@ 2006-05-09 12:35       ` Heiko J Schick
  2006-05-09 16:23         ` Roland Dreier
  0 siblings, 1 reply; 15+ messages in thread
From: Heiko J Schick @ 2006-05-09 12:35 UTC (permalink / raw)
  To: Roland Dreier
  Cc: linux-kernel, openib-general, linuxppc-dev, Christoph Raisch,
	Hoang-Nam Nguyen, Marcus Eder

Roland Dreier wrote:
>     Heiko> Originally, we had the same idea you mention: that it
>     Heiko> would be better to do this in the higher levels. The point
>     Heiko> is that so far we can't see any simple possibility of how
>     Heiko> this can be done in the OpenIB stack, the TCP/IP network
>     Heiko> layer, or somewhere else in the Linux kernel.
> 
>     Heiko> For example: For IPoIB we get the best throughput when we
>     Heiko> do the CQ callbacks on different CPUs rather than staying
>     Heiko> on the same CPU.
> 
> So why not do it in IPoIB then?  This approach is not optimal
> globally.  For example, uverbs event dispatch is just going to queue
> an event and wake up the process waiting for events, and doing this on
> some random CPU not related to where the process will run is
> clearly the worst possible way to dispatch the event.

Yes, I agree. It would not be an optimal solution, because other upper
level protocols (e.g. SDP, SRP, etc.) or userspace verbs would not be
affected by these changes. Nevertheless, what would an improved
"scaling" or "SMP" version of IPoIB look like? How could it be
implemented?

>     Heiko> In other papers and slides (see [1]) you can see similar
>     Heiko> approaches.
> 
>     Heiko> [1]: Speeding up Networking, Van Jacobson and Bob
>     Heiko> Felderman,
>     Heiko> http://www.lemis.com/grog/Documentation/vj/lca06vj.pdf

> I think you've misunderstood this paper.  It's about maximizing CPU
> locality and pushing processing directly into the consumer.  In the
> context of slide 9, what you've done is sort of like adding another
> control loop inside the kernel, since you dispatch from interrupt
> handler to driver thread to final consumer.  So I would argue that
> your approach is exactly the opposite of what VJ is advocating.

Sorry, my idea was not to use the *.pdf file as a guide for how this
should be implemented. I only wanted to show that other people are
also thinking about how TCP/IP performance could be increased and
where the bottlenecks (e.g. SOFTIRQs) are. :)

Regards,
	Heiko


* Re: [openib-general] [PATCH 07/16] ehca: interrupt handling routines
       [not found] <4450A196.2050901@de.ibm.com>
  2006-05-04 21:29 ` [openib-general] [PATCH 07/16] ehca: interrupt handling routines Roland Dreier
@ 2006-05-09 13:15 ` Christoph Hellwig
  1 sibling, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2006-05-09 13:15 UTC (permalink / raw)
  To: Heiko J Schick
  Cc: openib-general, Christoph Raisch, Hoang-Nam Nguyen, Marcus Eder,
	linux-kernel, linuxppc-dev

> +#include <linux/interrupt.h>
> +#include <asm/atomic.h>
> +#include <asm/types.h>

Please don't ever use <asm/types.h> directly.  Always include
<linux/types.h> instead.
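
I.e. that block should read:

	#include <linux/interrupt.h>
	#include <linux/types.h>	/* instead of <asm/types.h> */
	#include <asm/atomic.h>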



* Re: [openib-general] [PATCH 07/16] ehca: interrupt handling routines
  2006-05-09 12:35       ` Heiko J Schick
@ 2006-05-09 16:23         ` Roland Dreier
  2006-05-09 16:49           ` Michael S. Tsirkin
  2006-05-09 23:35           ` [openib-general] " Segher Boessenkool
  0 siblings, 2 replies; 15+ messages in thread
From: Roland Dreier @ 2006-05-09 16:23 UTC (permalink / raw)
  To: Heiko J Schick
  Cc: linux-kernel, openib-general, linuxppc-dev, Christoph Raisch,
	Hoang-Nam Nguyen, Marcus Eder

    Heiko> Yes, I agree. It would not be an optimal solution, because
    Heiko> other upper level protocols (e.g. SDP, SRP, etc.) or
    Heiko> userspace verbs would not be affected by these changes.
    Heiko> Nevertheless, what would an improved "scaling" or "SMP"
    Heiko> version of IPoIB look like? How could it be implemented?

The trivial way to do it would be to use the same idea as the current
ehca driver: just create a thread for receive CQ events and a thread
for send CQ events, and defer CQ polling into those two threads.
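
Roughly (a sketch only, with hypothetical helpers, untested):

	static int cq_poll_thread(void *arg)
	{
		struct ib_cq *cq = arg;
		struct ib_wc wc;

		while (!kthread_should_stop()) {
			wait_event_interruptible(cq_wait,
						 atomic_read(&cq_pending) ||
						 kthread_should_stop());
			atomic_set(&cq_pending, 0);

			while (ib_poll_cq(cq, 1, &wc) > 0)
				handle_wc(cq, &wc);	/* hypothetical */

			/* re-arm, then poll once more to close the race
			   with completions that arrived in between */
			ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);
			while (ib_poll_cq(cq, 1, &wc) > 0)
				handle_wc(cq, &wc);
		}
		return 0;
	}

The CQ's comp_handler then does nothing but set cq_pending and wake
cq_wait.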

Something even better may be possible by specializing to IPoIB, of course.

 - R.


* Re: [PATCH 07/16] ehca: interrupt handling routines
  2006-05-09 16:23         ` Roland Dreier
@ 2006-05-09 16:49           ` Michael S. Tsirkin
       [not found]             ` <OF22D08323.20D303C1-ON87257169.0063C980-88257169.006A41AB@us.ibm.com>
  2006-05-09 18:57             ` [openib-general] " Heiko J Schick
  2006-05-09 23:35           ` [openib-general] " Segher Boessenkool
  1 sibling, 2 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2006-05-09 16:49 UTC (permalink / raw)
  To: Roland Dreier
  Cc: Heiko J Schick, linux-kernel, openib-general, linuxppc-dev,
	Christoph Raisch, Hoang-Nam Nguyen, Marcus Eder

Quoting r. Roland Dreier <rdreier@cisco.com>:
> The trivial way to do it would be to use the same idea as the current
> ehca driver: just create a thread for receive CQ events and a thread
> for send CQ events, and defer CQ polling into those two threads.

For RX, isn't this basically what NAPI is doing?
Only NAPI seems better, avoiding interrupts completely and avoiding the
latency hit by only getting triggered under high load ...

-- 
MST


* Re: [openib-general] Re: [PATCH 07/16] ehca: interrupt handling routines
       [not found]             ` <OF22D08323.20D303C1-ON87257169.0063C980-88257169.006A41AB@us.ibm.com>
@ 2006-05-09 18:36               ` Roland Dreier
  2006-05-09 18:44               ` Michael S. Tsirkin
  1 sibling, 0 replies; 15+ messages in thread
From: Roland Dreier @ 2006-05-09 18:36 UTC (permalink / raw)
  To: Shirley Ma
  Cc: Michael S. Tsirkin, linux-kernel, openib-general, linuxppc-dev,
	Christoph Raisch, Hoang-Nam Nguyen, Marcus Eder,
	openib-general-bounces

    Shirley> I have done a patch like that on top of splitting the
    Shirley> CQ. The problem I found is that the hardware interrupt
    Shirley> favors one CPU. Most of the time these two threads are
    Shirley> running on the same CPU, according to my debug output.
    Shirley> You can easily find this out with cat /proc/interrupts
    Shirley> and /proc/irq/XXX/smp_affinity.  ehca distributes
    Shirley> interrupts evenly on SMP, so it gets the benefit of two
    Shirley> threads and gains much better throughput.

Yes, an interrupt will likely be delivered to one CPU.

But there's no reason why the two threads can't be pinned to different
CPUs or given exclusive CPU masks, exactly the same way that ehca
implements it.
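
Something like this (hypothetical names, untested) would do the pinning:

	#include <linux/kthread.h>

	rx_task = kthread_create(cq_poll_thread, recv_cq, "ipoib_rx_poll");
	tx_task = kthread_create(cq_poll_thread, send_cq, "ipoib_tx_poll");

	/* bind before the threads start running */
	kthread_bind(rx_task, rx_cpu);	/* e.g. not the IRQ's CPU */
	kthread_bind(tx_task, tx_cpu);
	wake_up_process(rx_task);
	wake_up_process(tx_task);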

 - R.


* Re: [openib-general] Re: [PATCH 07/16] ehca: interrupt handling routines
       [not found]             ` <OF22D08323.20D303C1-ON87257169.0063C980-88257169.006A41AB@us.ibm.com>
  2006-05-09 18:36               ` [openib-general] " Roland Dreier
@ 2006-05-09 18:44               ` Michael S. Tsirkin
       [not found]                 ` <OF9332FF11.38007290-ON87257169.00673F71-88257169.006C7F6E@us.ibm.com>
  1 sibling, 1 reply; 15+ messages in thread
From: Michael S. Tsirkin @ 2006-05-09 18:44 UTC (permalink / raw)
  To: Shirley Ma
  Cc: Hoang-Nam Nguyen, linux-kernel, linuxppc-dev, Marcus Eder,
	openib-general, openib-general-bounces, Christoph Raisch,
	Roland Dreier

Quoting r. Shirley Ma <xma@us.ibm.com>:
> According to some results from different resources, NAPI only gives a 3%-10% performance improvement on a single CQ.

When you say performance, you mean bandwidth.
But I think it should improve CPU utilization on the RX side significantly.
If it does, that's an important metric as well.

> I am trying a simple NAPI patch on top of splitting the CQ now to see how much performance improvement there is.

What are you using for a benchmark?

-- 
MST


* Re: Re: [PATCH 07/16] ehca: interrupt handling routines
       [not found]                 ` <OF9332FF11.38007290-ON87257169.00673F71-88257169.006C7F6E@us.ibm.com>
@ 2006-05-09 18:55                   ` Michael S. Tsirkin
  0 siblings, 0 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2006-05-09 18:55 UTC (permalink / raw)
  To: Shirley Ma
  Cc: Hoang-Nam Nguyen, linux-kernel, linuxppc-dev, Marcus Eder,
	openib-general, openib-general-bounces, Christoph Raisch,
	Roland Dreier

Quoting r. Shirley Ma <xma@us.ibm.com>:
> No, CPU utilization wasn't reduced. When you use a single CQ, NAPI polls both RX and TX.

I think NAPI's point is to reduce the interrupt rate.
Wouldn't this reduce CPU load?

> netperf, iperf, mpstat, netpipe, oprofile -- what's your suggestion?

netperf has -C which gives CPU load, which is handy.
Running vmstat in another window also works reasonably well.

-- 
MST


* Re: [openib-general] Re: [PATCH 07/16] ehca: interrupt handling routines
  2006-05-09 16:49           ` Michael S. Tsirkin
       [not found]             ` <OF22D08323.20D303C1-ON87257169.0063C980-88257169.006A41AB@us.ibm.com>
@ 2006-05-09 18:57             ` Heiko J Schick
  2006-05-09 19:04               ` Stephen Hemminger
       [not found]               ` <OF6CAB9865.804CAFBB-ON87257169.006C3DBC-88257169.00718277@us.ibm.com>
  1 sibling, 2 replies; 15+ messages in thread
From: Heiko J Schick @ 2006-05-09 18:57 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Roland Dreier, linux-kernel, openib-general, linuxppc-dev,
	Christoph Raisch, Hoang-Nam Nguyen, Marcus Eder

On 09.05.2006, at 18:49, Michael S. Tsirkin wrote:

>> The trivial way to do it would be to use the same idea as the current
>> ehca driver: just create a thread for receive CQ events and a thread
>> for send CQ events, and defer CQ polling into those two threads.
>
> For RX, isn't this basically what NAPI is doing?
> Only NAPI seems better, avoiding interrupts completely and avoiding the
> latency hit by only getting triggered under high load ...

Does NAPI schedule CQ callbacks to different CPUs, or does the callback
(handling of data, etc.) stay on the same CPU where the interrupt came in?

Regards,
	Heiko


* Re: [openib-general] Re: [PATCH 07/16] ehca: interrupt handling routines
  2006-05-09 18:57             ` [openib-general] " Heiko J Schick
@ 2006-05-09 19:04               ` Stephen Hemminger
       [not found]               ` <OF6CAB9865.804CAFBB-ON87257169.006C3DBC-88257169.00718277@us.ibm.com>
  1 sibling, 0 replies; 15+ messages in thread
From: Stephen Hemminger @ 2006-05-09 19:04 UTC (permalink / raw)
  To: linux-kernel

On Tue, 9 May 2006 20:57:01 +0200
Heiko J Schick <info@schihei.de> wrote:

> On 09.05.2006, at 18:49, Michael S. Tsirkin wrote:
> 
> >> The trivial way to do it would be to use the same idea as the current
> >> ehca driver: just create a thread for receive CQ events and a thread
> >> for send CQ events, and defer CQ polling into those two threads.
> >
> > For RX, isn't this basically what NAPI is doing?
> > Only NAPI seems better, avoiding interrupts completely and avoiding the
> > latency hit by only getting triggered under high load ...
> 
> Does NAPI schedule CQ callbacks to different CPUs, or does the callback
> (handling of data, etc.) stay on the same CPU where the interrupt came in?
> 

NAPI runs the callback on the same CPU that called netif_rx_schedule().
This has the benefit of cache locality and reduces locking overhead.
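
I.e. the usual pattern, schematically (hypothetical helper, 2.6.16-era
interface, handler arguments trimmed):

	static irqreturn_t nic_interrupt(int irq, void *dev_id)
	{
		struct net_device *dev = dev_id;

		if (netif_rx_schedule_prep(dev)) {
			nic_disable_rx_irq(dev);	/* hypothetical */
			/* raises NET_RX_SOFTIRQ, so dev->poll later
			   runs on this same CPU */
			__netif_rx_schedule(dev);
		}
		return IRQ_HANDLED;
	}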


* Re: [openib-general] Re: [PATCH 07/16] ehca: interrupt handling routines
       [not found]               ` <OF6CAB9865.804CAFBB-ON87257169.006C3DBC-88257169.00718277@us.ibm.com>
@ 2006-05-09 20:20                 ` Michael S. Tsirkin
       [not found]                   ` <OFD1053717.15B4F49A-ON87257169.007531E1-88257169.007AD29E@us.ibm.com>
  0 siblings, 1 reply; 15+ messages in thread
From: Michael S. Tsirkin @ 2006-05-09 20:20 UTC (permalink / raw)
  To: Shirley Ma
  Cc: Heiko J Schick, Hoang-Nam Nguyen, linux-kernel, linuxppc-dev,
	Marcus Eder, openib-general, openib-general-bounces,
	Christoph Raisch, Roland Dreier

Quoting r. Shirley Ma <xma@us.ibm.com>:
> My understanding is that NAPI handles interrupts and CQ callbacks on the same CPU.

My understanding is NAPI disables interrupts under high RX load. No?

-- 
MST


* Re: [openib-general] [PATCH 07/16] ehca: interrupt handling routines
  2006-05-09 16:23         ` Roland Dreier
  2006-05-09 16:49           ` Michael S. Tsirkin
@ 2006-05-09 23:35           ` Segher Boessenkool
  1 sibling, 0 replies; 15+ messages in thread
From: Segher Boessenkool @ 2006-05-09 23:35 UTC (permalink / raw)
  To: Roland Dreier
  Cc: Heiko J Schick, linux-kernel, openib-general, linuxppc-dev,
	Christoph Raisch, Hoang-Nam Nguyen, Marcus Eder

>     Heiko> Yes, I agree. It would not be an optimal solution, because
>     Heiko> other upper level protocols (e.g. SDP, SRP, etc.) or
>     Heiko> userspace verbs would not be affected by these changes.
>     Heiko> Nevertheless, what would an improved "scaling" or "SMP"
>     Heiko> version of IPoIB look like? How could it be implemented?
>
> The trivial way to do it would be to use the same idea as the current
> ehca driver: just create a thread for receive CQ events and a thread
> for send CQ events, and defer CQ polling into those two threads.
>
> Something even better may be possible by specializing to IPoIB, of course.

The hardware IRQ should go to some CPU close to the hardware itself.
The softirq (or whatever else) should go to the same CPU that is
handling the user-level task for that message. Or a CPU close to it,
at least.


Segher



* Re: [openib-general] Re: [PATCH 07/16] ehca: interrupt handling routines
       [not found]                   ` <OFD1053717.15B4F49A-ON87257169.007531E1-88257169.007AD29E@us.ibm.com>
@ 2006-05-10  5:33                     ` Michael S. Tsirkin
  0 siblings, 0 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2006-05-10  5:33 UTC (permalink / raw)
  To: Shirley Ma
  Cc: Hoang-Nam Nguyen, Heiko J Schick, linux-kernel, linuxppc-dev,
	Marcus Eder, openib-general, openib-general-bounces,
	Christoph Raisch, Roland Dreier

Quoting r. Shirley Ma <xma@us.ibm.com>:
> Subject: Re: [openib-general] Re: [PATCH 07/16] ehca: interrupt handling routines
> 
> 
> "Michael S. Tsirkin" <mst@mellanox.co.il> wrote on 05/09/2006 01:20:41 PM:
> 
> > Quoting r. Shirley Ma <xma@us.ibm.com>:
> > > My understanding is that NAPI handles interrupts and CQ callbacks on the same CPU.
> >
> > My understanding is NAPI disables interrupts under high RX load. No?
> 
> Yes, NAPI disables the interrupts based on the weight. In the IPoIB case,
> it doesn't send out the next completion notification under heavy load.
> The same CQ polling is still done in NAPI on the same CPU, but it's not a
> callback anymore.

Sorry, same CPU as what?

-- 
MST

