From: Matt Porter <mporter@kernel.crashing.org>
To: Avni Hillel-R58467 <hillel.avni@freescale.com>
Cc: "'linuxppc-embedded@ozlabs.org'" <linuxppc-embedded@ozlabs.org>
Subject: Re: [PATCH][5/5] RapidIO support: net driver over messaging
Date: Wed, 27 Jul 2005 06:56:26 -0700	[thread overview]
Message-ID: <20050727065626.D32625@cox.net> (raw)
In-Reply-To: <D39C8B3D64A0D511A61E00D0B7828EEA0C0764FD@zil05exm01.ea.freescale.net>; from hillel.avni@freescale.com on Wed, Jul 27, 2005 at 01:36:15PM +0300

On Wed, Jul 27, 2005 at 01:36:15PM +0300, Avni Hillel-R58467 wrote:
> Hi Matt,
> 
> Two questions:
> 
> A. How can a node (not the host) know who is in the rionet to broadcast to them? 

All nodes in the rionet are flagged in the active peer list.  There is an
active peer list kept for the rionet instance on _each node_.  There is
no distinction as to whether a node was the winning enumerating host or
is just another processing element that found devices in the system via
passive discovery.  The only thing inherently significant about a "host"
in RapidIO is that it participates in enumeration.  After the system is
enumerated it's no longer special (unless your particular system
application designates that the hosts have some special network-wide
ownership of resources or something).
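
To make that concrete, here is a rough sketch (not the posted driver
verbatim) of how each node can maintain its own peer table: a peer
announces itself, e.g. with a doorbell, and its entry gets marked
active.  RIONET_MAX_PEERS, rionet_active[], RIONET_DBELL_JOIN, and
lookup_peer_by_destid() are illustrative names only, not necessarily
what's in the patch.

#include <linux/rio.h>
#include <linux/rio_drv.h>

#define RIONET_MAX_PEERS	256	/* illustrative table size */
#define RIONET_DBELL_JOIN	0	/* illustrative "join" doorbell info value */

static struct rio_dev *rionet_active[RIONET_MAX_PEERS];

/* Inbound doorbell handler sketch: mark the sender as an active peer. */
static void rionet_dbell_sketch(struct rio_mport *mport, void *dev_id,
				u16 src, u16 dst, u16 info)
{
	/* lookup_peer_by_destid() is hypothetical: map destid -> rio_dev */
	struct rio_dev *peer = lookup_peer_by_destid(src);

	if (info == RIONET_DBELL_JOIN && peer)
		rionet_active[src] = peer;	/* now eligible for broadcast */
}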

Broadcast works the same way on all nodes by sending the same packet
to every node in the active peer list.
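
As a sketch (same caveats and illustrative names as above), the
broadcast path can simply walk that table and queue the same payload
once per active peer.  rio_add_outb_message() is the outbound messaging
call from these patches; mailbox 0 is assumed here.

#include <linux/skbuff.h>

/* Broadcast sketch: queue the same data to every node in the peer list. */
static int rionet_bcast_sketch(struct rio_mport *mport, struct sk_buff *skb)
{
	int i, sent = 0;

	for (i = 0; i < RIONET_MAX_PEERS; i++) {
		if (!rionet_active[i])		/* skip inactive entries */
			continue;
		rio_add_outb_message(mport, rionet_active[i], 0 /* mbox */,
				     skb->data, skb->len);
		sent++;
	}

	return sent;
}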

> B. How do you emulate broadcasting to all the mailboxes, in multi mbox systems? Is this done by the node getting the broadcast in MB 0 and forwarding it to the other MBs?

rionet doesn't handle multiple mailboxes yet.

However, it becomes tricky because we don't want the driver to bridge
separate Ethernet networks by policy.  If two mailboxes are part of
separate RIO device trees, it doesn't make sense to send broadcasts out
on both mailboxes.  It needs some thought, and also documentation on how
new silicon might implement queues for additional mailboxes.

With RIO, so much is left implementation-specific in the silicon that it
did not make sense to make assumptions and try to handle multiple
mailboxes up front.  If you have a multi-mailbox system, a description
of it would help us work out how to support it.

-Matt

Thread overview:
2005-07-27 10:36 [PATCH][5/5] RapidIO support: net driver over messaging Avni Hillel-R58467
2005-07-27 13:56 ` Matt Porter [this message]
  -- strict thread matches above, loose matches on Subject: below --
2005-06-02 21:03 [PATCH][1/5] RapidIO support: core Matt Porter
2005-06-02 21:12 ` [PATCH][2/5] RapidIO support: core includes Matt Porter
2005-06-02 21:19   ` [PATCH][3/5] RapidIO support: enumeration Matt Porter
2005-06-02 21:25     ` [PATCH][4/5] RapidIO support: ppc32 Matt Porter
2005-06-02 21:34       ` [PATCH][5/5] RapidIO support: net driver over messaging Matt Porter
2005-06-02 22:05         ` Stephen Hemminger
2005-06-03 22:43           ` Matt Porter
