* PowerPC Beowulf Who?
@ 1999-02-22 17:35 Campbell, Marc
1999-02-23 6:14 ` Robert G. Werner
1999-03-02 11:00 ` Martin Konold
0 siblings, 2 replies; 14+ messages in thread
From: Campbell, Marc @ 1999-02-22 17:35 UTC (permalink / raw)
To: extreme-linux, beowulf, linuxppc-dev, linuxppc-user
Any suggestions on who to contact for PowerPC & PowerPC/AltiVec Beowulf development, integration, sales, and/or support?
Marc Campbell
Northrop Grumman
Technology Development
High Performance Computing, Group Lead
Melbourne, FL
campbma1@mail.northgrum.com
[[ This message was sent via the linuxppc-dev mailing list. Replies are ]]
[[ not forced back to the list, so be sure to Cc linuxppc-dev if your ]]
[[ reply is of general interest. To unsubscribe from linuxppc-dev, send ]]
[[ the message 'unsubscribe' to linuxppc-dev-request@lists.linuxppc.org ]]
* Re: PowerPC Beowulf Who?
1999-02-22 17:35 PowerPC Beowulf Who? Campbell, Marc
@ 1999-02-23 6:14 ` Robert G. Werner
1999-02-23 22:13 ` Campbell, Marc
1999-03-02 11:00 ` Martin Konold
1 sibling, 1 reply; 14+ messages in thread
From: Robert G. Werner @ 1999-02-23 6:14 UTC (permalink / raw)
To: Campbell, Marc; +Cc: linuxppc-dev
Are the issues that different for PPC than for the other architectures that you
need to pay special attention to this matter? My understanding was that the
clustering techniques used for Beowulf were based on standard libs (PVM,
IIRC). Thus the issues seem like they should be pretty much the same as on any
other HW, barring networking HW questions.
Am I way off base here?
Robert G. Werner
rwerner@lx1.microbsys.com
Impeach Congress!!
To believe your own thought, to believe that what is true for
you in your private heart is true for all men -- that is genius.
-- Ralph Waldo Emerson
On Mon, 22 Feb 1999, Campbell, Marc wrote:
> [snip]
* Re: PowerPC Beowulf Who?
1999-02-23 6:14 ` Robert G. Werner
@ 1999-02-23 22:13 ` Campbell, Marc
1999-02-23 23:22 ` Robert G. Werner
0 siblings, 1 reply; 14+ messages in thread
From: Campbell, Marc @ 1999-02-23 22:13 UTC (permalink / raw)
To: Robert G. Werner; +Cc: linuxppc-dev
There are several issues, some PPC specific and some not.
General issues include the ability to procure a turnkey system and software revision support.
PPC specific issues start with the assurance that the necessary libs (e.g. MPI) have been checked out on PPC hardware clusters.
Another PPC issue is cluster communication. In some cases standard Ethernet does not provide enough cluster bandwidth. In what cases does FireWire make sense in a PPC Beowulf (Linux) cluster? Is Myrinet (www.myri.com) a more appropriate clustering communication system? If Myrinet, then where are the LinuxPPC or YDL drivers? Myrinet lists Alpha Linux and x86 Linux but NOT PPC Linux (http://www.myri.com:80/GM/).
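For context, a back-of-envelope sketch of when a 100 Mb Ethernet link stops keeping up with a node. All numbers here are hypothetical, chosen only to illustrate the bandwidth concern, not taken from any benchmark:

```python
# Back-of-envelope check (hypothetical numbers) of whether a cluster
# interconnect can keep up with a given communication pattern.

def interconnect_ok(flops_per_node, bytes_per_flop, link_bytes_per_sec):
    """True if the link can carry the traffic the compute rate implies."""
    needed = flops_per_node * bytes_per_flop  # bytes/sec the node wants to move
    return needed <= link_bytes_per_sec

# 100 Mb/s Ethernet moves at most 12.5 MB/s of payload (before protocol overhead).
fast_ethernet = 100e6 / 8

# A 200 MFLOPS node exchanging 0.1 bytes per flop needs 20 MB/s -- too much.
print(interconnect_ok(200e6, 0.1, fast_ethernet))   # False
# The same node at 0.01 bytes/flop needs only 2 MB/s -- fine.
print(interconnect_ok(200e6, 0.01, fast_ethernet))  # True
```

The crossover depends entirely on the application's bytes-per-flop ratio, which is why no single interconnect answer fits every cluster.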
Who is taking initiative in the PowerPC/AltiVec Beowulf (Linux clustering) area?
-- Marc
Marc Campbell
Northrop Grumman
Technology Development
High Performance Computing, Group Lead
Melbourne, FL
campbma1@mail.northgrum.com
"Robert G. Werner" wrote:
> [snip]
* Re: PowerPC Beowulf Who?
1999-02-23 22:13 ` Campbell, Marc
@ 1999-02-23 23:22 ` Robert G. Werner
1999-02-25 4:06 ` Troy Benjegerdes
0 siblings, 1 reply; 14+ messages in thread
From: Robert G. Werner @ 1999-02-23 23:22 UTC (permalink / raw)
To: Campbell, Marc; +Cc: linuxppc-dev
I'm pretty new to Beowulf myself, but I haven't run across any groups
specifically oriented toward supporting PPC.
Regarding Beowulf in general:
Turnkey Beowulf systems are still a new phenomenon. Most of the famous
clusters I've heard of were hand built.
I've heard of some companies (I don't remember names) that will sell turnkey
systems, but I'm assuming those are IA32 or Alpha.
What exactly do you mean by SW revision support?
On the PPC:
IIRC, FireWire isn't even working under LinuxPPC as a regular interface yet.
Looking into using it for networking in a Beowulf cluster is a bit
premature, IMHO. Is there even any commercial HW for this?
I really do have a hunch that once you can get FireWire, say, to work with
Linux on PPC, then it will be trivial to make it work with the clustering
libs.
Check out the Linux Parallel Processing HOWTO
(at http://metalab.unc.edu/mdw/HOWTO/Parallel-Processing-HOWTO-2.html)
for some information on who is developing the non-Ethernet networking
technologies for clustering. Purdue is big in this field, IIRC. They even have
a nice technique using the parallel port for networking (PAPERS).
As far as I know, all of these are great questions but Beowulf clustering
technology is still experimental enough that many of your questions haven't
been completely answered for IA32 and Alpha let alone PPC.
Again, I'm not an expert. Just an enthusiastic beginner like yourself.
BTW, what applications are you looking to use your cluster for? Do you have a
specific task in mind or are you just looking to make sure that LinuxPPC can do
clusters too?
Robert G. Werner
rwerner@lx1.microbsys.com
Impeach Congress!!
"Just think, with VLSI we can have 100 ENIACS on a chip!"
-- Alan Perlis
On Tue, 23 Feb 1999, Campbell, Marc wrote:
> [snip]
* Re: PowerPC Beowulf Who?
1999-02-23 23:22 ` Robert G. Werner
@ 1999-02-25 4:06 ` Troy Benjegerdes
1999-02-25 10:28 ` sean o'malley
0 siblings, 1 reply; 14+ messages in thread
From: Troy Benjegerdes @ 1999-02-25 4:06 UTC (permalink / raw)
To: Robert G. Werner; +Cc: Campbell, Marc, linuxppc-dev
On Tue, 23 Feb 1999, Robert G. Werner wrote:
>
> I'm pretty new to Beowulf myself, but I haven't run across any groups
> specifically oriented toward supporting PPC.
>
> Regarding Beowulf in general:
> Turnkey Beowulf systems are still a new phenomenon. Most of the famous
> clusters I've heard of were hand built.
> I've heard of some companies (I don't remember names) that will sell turnkey
> systems, but I'm assuming those are IA32 or Alpha.
> What exactly do you mean by SW revision support?
>
> On the PPC:
> IIRC Firewire isn't even working under LinuxPPC as a regular interface yet.
> Looking into using it for networking in a Beowulf cluster is a bit
> premature IMHO. Is there even any commercial HW for this?
> I really do have a hunch that once you can get Firewire, say, to work with
> Linux on PPC, then it will be trivial to make it work with the clustering
> libs.
I wouldn't hold out high hopes for FireWire. Fast Ethernet and Gigabit
Ethernet are both much more cost effective, and available now. One of the
things that makes fast Ethernet work so well for Beowulf applications is
the price/performance one can get with fast Ethernet switches. Gigabit
Ethernet is quickly getting very affordable as well.
I doubt FireWire switches will be available any time in the near future,
not to mention TCP/IP support for FireWire.
>
> Check out the Linux Parallel Processing HOWTO
> (at http://metalab.unc.edu/mdw/HOWTO/Parallel-Processing-HOWTO-2.html)
> for some information on who is developing the non-Ethernet networking
> technologies for clustering. Purdue is big in this field, IIRC. They even have
> a nice technique using the parallel port for networking (PAPERS).
> As far as I know, all of these are great questions, but Beowulf clustering
> technology is still experimental enough that many of your questions haven't
> been completely answered for IA32 and Alpha, let alone PPC.
> Again, I'm not an expert. Just an enthusiastic beginner like yourself.
>
Also see http://www.scl.ameslab.gov/Projects/ClusterCookbook/
Actually, I would say Beowulf applications are quite usable. I work at
the Scalable Computing Lab (the above URL), and we regularly have users running
scientific applications on a 64-node Pentium Pro cluster.
> [snip]
> On Tue, 23 Feb 1999, Campbell, Marc wrote:
>
> > There are several issues. Some PPC specific and some not.
> >
> > General issues would include the ability to procure turn key system and software revision support.
> >
> > PPC specific issues start with the assurance that the necessary libs (e.g. MPI) have been checked out on PPC hardware clusters.
> >
> > Another PPC issue is cluster communication. In some cases standard ethernet is not enough cluster bandwidth. In what case does FireWire make sense in a PPC Beowulf (Linux) cluster? Is Myrinet (www.myri.com) a more appropriate clustering communication
> system? If Myrinet, then where are the LinuxPPC or YDL drivers? Myrinet lists Alpha Linux and x86 Linux but NOT PPC Linux (http://www.myri.com:80/GM/).
> >
> > Who is taking initiative in the PowerPC/AltiVec Beowulf (Linux clustering) area?
It seems the Linux clustering area is a rather small (but quickly growing)
niche, and the PPC niche even smaller. To compound the problem, PPC
hardware either isn't supported very well under Linux (the new Apple
G3s) or is expensive (Motorola's MTX boards). This isn't to say the new
G3s won't get support eventually, but by the time it's available, the
hardware is old. Someone building a cluster wants it to work ASAP and
doesn't want year-old hardware.
I personally think the Motorola MTX boards would be terrific for clusters,
since one could rack mount the boards on approximately 2-3 inch
spacing, given that SCSI and fast Ethernet are built onto the board. Then there's
the bonus that they draw only 30 watts for a dual 604e, cutting down on
cooling and power bills. The only difficulty is that you're going to pay about
twice what a similar Pentium II cluster will cost. (Of course, rack
mounting PIIs isn't exactly cheap either.)
--------------------------------------------------------------------------
| Troy Benjegerdes | troy@microux.com | hozer@drgw.net |
| Unix is user friendly... You just have to be friendly to it first. |
| This message composed with 100% free software. http://www.gnu.org |
--------------------------------------------------------------------------
* Re: PowerPC Beowulf Who?
1999-02-25 4:06 ` Troy Benjegerdes
@ 1999-02-25 10:28 ` sean o'malley
1999-02-25 19:31 ` Robert G. Werner
1999-02-26 2:56 ` Douglas Godfrey
0 siblings, 2 replies; 14+ messages in thread
From: sean o'malley @ 1999-02-25 10:28 UTC (permalink / raw)
To: linuxppc-dev
Brief:
FireWire is _begging_ for clustering/asymmetrical processing (think about a
block of networked RAM). It's almost a waste not to use it for that.
Second, it's begging _not_ to use TCP/IP.
Third, I don't think it needs routers/switches, and it definitely doesn't need
as much work to install a network vs. fibre.
Fourth, if someone wants to hack at a FireWire driver, try (I don't have a
FireWire device):
http://www.edu.uni-klu.ac.at/~epirker/ieee1394.html
Fifth, I really need to read up on this stuff =)
Sixth, Compaq is selling Linux clusters with the new Alphas. (Or so I read
somewhere.)
Sean
>[snip]
* Re: PowerPC Beowulf Who?
1999-02-25 19:31 ` Robert G. Werner
@ 1999-02-25 17:05 ` sean o'malley
1999-02-25 22:30 ` Robert G. Werner
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: sean o'malley @ 1999-02-25 17:05 UTC (permalink / raw)
To: Robert G. Werner; +Cc: linuxppc-dev
Okay, I'm talking about a more sophisticated model than the basic clustering
scenario you proposed.
I think for your model TCP/IP would be better.
I'm thinking of something a bit more sophisticated, and it may in fact be
called something quite different =)
What about a virtual computer that runs on the network but is non-existent
as an individual machine? It is made up of the entire network. You can
submit jobs to it and it will process them with the available resources of
the entire network.
In this model, with a FireWire backbone, you basically use the network as
the machine's bus: you can drop hard drives on the network as well as stacks of
RAM (which you don't need TCP/IP to utilize -- i.e., would you give your hard
drive an IP number? *ponders*). The virtual machine will basically steal
individual cycles from machines to process its information -- not in big
chunks but in rather tiny chunks, hence the worry about network overhead.
Basically it's taking the SMP model and exploding it onto a network.
No, it won't be as efficient as an SMP machine, nor as fast. But let's say
you work in an office building with 400 people and they each have their own
computer. You could get a lot of extra juice out of, say, your secretary's
machine while she is off at lunch, taking a phone call, or while you're
talking to her.
You also couldn't use TCP/IP for this model, because there is no
"real" machine; its existence is all the parts of the network, not one
aspect of it. You don't really want to assign IP numbers to hard drives, nor
would you want to assign them to blocks of RAM or scanners on the network
when in fact you don't need them.
And yes, I realize it is probably more impractical at this stage than
functional, but it would be cool. =)
Sean
>[snip]
* Re: PowerPC Beowulf Who?
1999-02-25 10:28 ` sean o'malley
@ 1999-02-25 19:31 ` Robert G. Werner
1999-02-25 17:05 ` sean o'malley
1999-02-26 2:56 ` Douglas Godfrey
1 sibling, 1 reply; 14+ messages in thread
From: Robert G. Werner @ 1999-02-25 19:31 UTC (permalink / raw)
To: sean o'malley; +Cc: linuxppc-dev
I'm sorry, but I don't understand what you mean about FireWire begging for
clustering. It is an interesting technology in concept, but it hasn't been
implemented for much of anything but some cameras and maybe a hard drive or
two. I know we are nowhere near having FireWire working under Linux.
I think you will find that FireWire doesn't have a price/performance ratio that
is advantageous over 100Mb Ethernet and, later, Gb Ethernet. That technology
has been implemented, is relatively mature, and works under nearly all OSs.
Let's not get confused here: parallel processing works best when each node can
be given a chunk of the problem and associated data and then left alone. Every
time the cluster has to hit the network for anything more than a small message,
we lose performance relative to an SMP machine (internal data busses are always
going to be faster than going out to the network). Huge bandwidth isn't really
the biggest constraint with clusters. Breaking up the problem correctly is
much more important.
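The granularity argument above can be put in a toy cost model. All numbers below (latency, bandwidth, message counts) are assumed purely for illustration:

```python
# Toy model (all numbers assumed) of parallel run time: per-message latency
# often costs more than raw bandwidth once a job communicates too often.

def run_time(work_sec, nodes, messages, msg_bytes, latency_sec, bytes_per_sec):
    compute = work_sec / nodes                                   # ideal speedup
    comms = messages * (latency_sec + msg_bytes / bytes_per_sec) # network cost
    return compute + comms

# 1000 s of serial work on 16 nodes, 100 Mb/s Ethernet, ~100 us latency:
few_big = run_time(1000, 16, 100, 1_000_000, 100e-6, 12.5e6)        # coarse-grained
many_tiny = run_time(1000, 16, 1_000_000, 100, 100e-6, 12.5e6)      # chatty

print(round(few_big, 1))    # 70.5 s: communication barely matters
print(round(many_tiny, 1))  # 170.5 s: latency alone dominates
```

Note that both runs move the same total 100 MB; only the message granularity differs, which is exactly the "break up the problem correctly" point.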
Just because you use FireWire doesn't mean you aren't going to use TCP/IP.
That is up to the drivers and what your basic networking stack can support. I
may be wrong on this, but I haven't heard anything about FireWire requiring a
different protocol. It seems like it would be a lot of work to make one up too,
especially as we have a very nice TCP/IP stack in Linux.
Robert G. Werner
rwerner@lx1.microbsys.com
Impeach Congress!!
If the girl you love moves in with another guy once, it's more than enough.
Twice, it's much too much. Three times, it's the story of your life.
On Thu, 25 Feb 1999, sean o'malley wrote:
[snip]
* Re: PowerPC Beowulf Who?
1999-02-25 17:05 ` sean o'malley
@ 1999-02-25 22:30 ` Robert G. Werner
1999-02-26 0:35 ` Nathan Hurst
` (2 subsequent siblings)
3 siblings, 0 replies; 14+ messages in thread
From: Robert G. Werner @ 1999-02-25 22:30 UTC (permalink / raw)
To: sean o'malley; +Cc: linuxppc-dev
I like your idea of a large virtual machine (in fact, that is the theory behind
one whole strand of clustering development). Currently this can be done with a
specific kind of problem: i.e., one that breaks up into nice chunks that each
individual machine can work on without too much input from other machines.
Once you start doing any sort of processing where the individual boxes in the
cluster need to be exchanging lots of data constantly, you would probably be
better off spending the money to dedicate one SMP box to the task.
Again, the big issue with problems that clusters address is not network
bandwidth. In fact, if I read the Beowulf HOWTO correctly, there are many
cases where increasing the bandwidth available to a cluster can drastically
degrade its performance relative to the investment you just made in networking
HW.
Whatever the case, you can treat a Beowulf cluster as a single virtual machine
for purposes of running some apps as it stands now. This doesn't mean that all
of the memory in the cluster is pooled (networks aren't, and won't ever be, fast
enough for this, IMHO). But a program that is able to break itself into numerous
sub-chunks can then send one of those chunks to each node in the cluster for
actual processing. This could work just dandy on a typical office LAN with a
bunch of machines running Linux with the PVM libs and the Beowulf extensions.
With a bit of creative scripting, you could easily dedicate unused processor
cycles to cluster tasks. However, this would only be useful, really, when
the network was quiet, so that latencies (not bandwidth) were low and the
computers could rapidly pass messages back and forth.
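A minimal sketch of this chunking idea, with ordinary local processes standing in for cluster nodes. This is plain Python, not PVM or Beowulf code; only the splitting logic is the point:

```python
# The "break the problem into chunks, hand one to each node" idea, sketched
# with local worker processes standing in for cluster nodes.
from multiprocessing import Pool

def chunk_work(chunk):
    # Each "node" gets its chunk plus the data it needs, then is left alone.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000))
    chunks = [data[i::4] for i in range(4)]   # one interleaved slice per worker
    with Pool(4) as pool:
        partials = pool.map(chunk_work, chunks)
    print(sum(partials))  # 332833500, same as the serial sum of squares
```

The final gather of the partial sums is the only communication step, which is what makes this shape of problem cluster-friendly.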
Apple and others have talked big about how FireWire will make an interesting
networking option. I don't think so, because by the time such HW is finalized
and implemented, 100BaseT Ethernet will be very mature and Gb Ethernet may be
quite reasonable too.
Price/performance of Ethernet is always going to be hard to beat because of its
wide use and off-the-shelf status. FireWire hasn't even been convincingly
implemented as a storage transmission technology, let alone a networking one.
Finally, until Linux and other Unixen have drivers for FireWire, the whole
discussion of this technology for networking is moot anyway.
Robert G. Werner
rwerner@lx1.microbsys.com
Impeach Congress!!
If the girl you love moves in with another guy once, it's more than enough.
Twice, it's much too much. Three times, it's the story of your life.
On Thu, 25 Feb 1999, sean o'malley wrote:
> Okay, I'm talking about a more sophisticated model than the basic clustering
> scenario you proposed.
[snip]
* Re: PowerPC Beowulf Who?
1999-02-25 17:05 ` sean o'malley
1999-02-25 22:30 ` Robert G. Werner
@ 1999-02-26 0:35 ` Nathan Hurst
1999-02-26 6:16 ` Troy Benjegerdes
1999-02-26 6:51 ` Cort Dougan
3 siblings, 0 replies; 14+ messages in thread
From: Nathan Hurst @ 1999-02-26 0:35 UTC (permalink / raw)
To: sean o'malley; +Cc: Robert G. Werner, linuxppc-dev
On Thu, 25 Feb 1999, sean o'malley wrote:
> In this model, with a FireWire backbone, you basically use the network as
> the machine's bus: you can drop hard drives on the network as well as stacks
> of RAM (which you don't need TCP/IP to utilize -- i.e., would you give your
> hard drive an IP number? *ponders*). The virtual machine will basically steal
> individual cycles from machines to process its information -- not in big
> chunks but in rather tiny chunks, hence the worry about network overhead.
For this I think FireWire is too slow as well. It is only 400 Mbps = 50 MBps,
which makes an external memory access take 20 ns per byte, with
an overhead of about 1 us. I agree that FireWire is better than 100 Mb
Ethernet, but only because FireWire doesn't have the slow packet structure
of Ethernet/IP.
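That arithmetic checks out; a quick sanity check (pure unit conversion, no FireWire specifics assumed beyond the 400 Mb/s rate):

```python
# FireWire at 400 Mb/s, converted to bytes and per-byte transfer time.
bits_per_sec = 400e6
bytes_per_sec = bits_per_sec / 8       # 50 MB/s
ns_per_byte = 1e9 / bytes_per_sec      # wire time to move one byte

print(bytes_per_sec / 1e6)  # 50.0 (MB/s)
print(ns_per_byte)          # 20.0 (ns), before the ~1 us per-transaction overhead
```

For comparison, main memory of that era moved a byte in a small fraction of that, which is why treating the network as a memory bus is such a stretch.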
> No, it won't be as efficient as an SMP machine, nor as fast. But let's say
> you work in an office building with 400 people and they each have their own
> computer. You could get a lot of extra juice out of, say, your secretary's
> machine while she is off at lunch, taking a phone call, or while you're
> talking to her.
Does firewire handle this many nodes? CORBA already does this, btw.
> You also couldn't use TCP/IP for this model either, because there is no
> "real" machine: its existence is all the parts of the network, not any one
> aspect of it. You don't really want to assign hard drives IP numbers, nor
> would you want to assign them to blocks of RAM or scanners on the network
> when in fact you don't need them..
How would you reference them if they don't have some form of reference
number?
njh
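Nathan's closing question can be made concrete: every bus-attached resource
needs *some* reference number, whatever it is called. IEEE 1394 devices
already carry a unique 64-bit EUI-64, so a hypothetical non-IP scheme could
simply key resources by that instead of an IP address. The registry below is
purely illustrative, not a real FireWire API; the GUID values are made up.

```python
# Minimal sketch: addressing network resources by hardware ID (EUI-64)
# rather than by IP number. All names and values here are hypothetical.

devices = {}  # EUI-64 -> description of the resource behind it

def attach(eui64: int, kind: str, capacity: str) -> None:
    """Register a bus-attached resource under its unique hardware ID."""
    devices[eui64] = {"kind": kind, "capacity": capacity}

def lookup(eui64: int) -> dict:
    """Resolve a reference number back to a resource -- the step that cannot
    be avoided, whichever numbering scheme the bus happens to use."""
    return devices[eui64]

attach(0x0050C5FFFE123456, "disk", "9 GB")
attach(0x0050C5FFFE654321, "ram", "128 MB")
print(lookup(0x0050C5FFFE123456)["kind"])   # disk
```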
* Re: PowerPC Beowulf Who?
1999-02-25 10:28 ` sean o'malley
1999-02-25 19:31 ` Robert G. Werner
@ 1999-02-26 2:56 ` Douglas Godfrey
1 sibling, 0 replies; 14+ messages in thread
From: Douglas Godfrey @ 1999-02-26 2:56 UTC (permalink / raw)
To: linuxppc-dev
At 5:28 AM -0500 2/25/99, sean o'malley wrote: [Re: PowerPC Beowulf Who?]
>Brief:
>Firewire is _begging_ for clustering/asymmetrical processing (think about a
>block of networked RAM). It's almost a waste not to use it for that.
>
>Second, it's begging _not_ to use TCP/IP.
>
>Third, I don't think it needs routers/switches, and it definitely doesn't
>need as much work to install a network vs. fibre.
>
>Fourth, if someone wants to hack at a firewire driver, try (I don't have
>firewire):
>http://www.edu.uni-klu.ac.at/~epirker/ieee1394.html
>
>Fifth, I really need to read up on this stuff =)
>
>Sixth, Compaq is selling Linux clusters with the new Alphas. (or so I read
>somewhere)
>
snip
Marathon Computer <http://www.marathoncomputer.com> is making rack-mountable
cases for the iMac which can put 2 iMac CPUs in a 1U ISO rack-mount case.
This lets you put 120 iMacs and 8 16-way 100baseT switches in a single rack.
You can mount an entire 266MHz PPC, 120-CPU Beowulf cluster in a single rack.
Thanx...
Doug
* Re: PowerPC Beowulf Who?
1999-02-25 17:05 ` sean o'malley
1999-02-25 22:30 ` Robert G. Werner
1999-02-26 0:35 ` Nathan Hurst
@ 1999-02-26 6:16 ` Troy Benjegerdes
1999-02-26 6:51 ` Cort Dougan
3 siblings, 0 replies; 14+ messages in thread
From: Troy Benjegerdes @ 1999-02-26 6:16 UTC (permalink / raw)
To: sean o'malley; +Cc: Robert G. Werner, linuxppc-dev
On Thu, 25 Feb 1999, sean o'malley wrote:
>
> Okay, I'm talking about a more sophisticated model than the basic clustering
> scenario you proposed.
> I think for your model maybe TCP/IP would be better.
>
> I'm thinking of something a bit more sophisticated, and it may in fact be
> called something quite different =)
>
> What about a virtual computer that runs on the network, but is non-existent
> as an individual machine? It is made up of the entire network. You can
> submit jobs to it and it will process them with the available resources of
> the entire network.
>
> In this model with a firewire backbone, you basically use the network as
> the machine's bus: you can drop hard drives on the network as well as stacks
> of RAM (which you don't need TCP/IP to utilize; i.e., do you give your hard
> drive an IP number? *ponders*). The virtual machine will basically steal
> individual cycles from machines to process its information, not in big
> chunks but in rather tiny chunks, hence the worry about network overhead.
The thing about Fast and Gigabit Ethernet is the cheap availability of
*switches*, which allow machines to use the full bandwidth available on a
100Mb link to talk to any other machine connected to the switch. This
also improves latency, since a machine doesn't have to wait until
everyone else quits talking.
Now, if you want to drop extra disks on the network, look at Fibre Channel
and UMN's GFS filesystem. One can also get Fibre Channel switches, so
scaling the size of a cluster is easy. http://gfs.lcse.umn.edu/
Until there are Firewire switches, I don't see much use for it in
clusters.
>
> Basically its taking the SMP model and exploding it to a network.
>
> No, it won't be as efficient as an SMP machine, nor as fast. But let's say
> you work in an office building with 400 people and they each have their own
> computer. You could get a lot of extra juice out of, say, your secretary's
> machine while she is off at lunch, taking a phone call, or while you're
> talking to her.
>
It sounds like what you're actually thinking of is transparent process
migration and load balancing. Take a look at
http://www.cnds.jhu.edu/mirrors/mosix/
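The scatter/gather shape of the "virtual computer" idea can be sketched in a
few lines. This is only an analogy, not MOSIX (which migrates whole processes
transparently in the kernel): thread-pool workers stand in for networked
machines, and the small chunk size mirrors the cycle-stealing model where
each node contributes only brief slices of idle time.

```python
# User-level analogy of the cluster model discussed in this thread:
# split a job into small work units, hand them to idle workers, gather results.
from concurrent.futures import ThreadPoolExecutor

def work_unit(chunk):
    """A tiny, self-contained piece of a larger job (here: sum of squares)."""
    return sum(x * x for x in chunk)

def run_job(data, nworkers=4, chunk_size=3):
    # Small chunks mirror the cycle-stealing model -- and show why
    # per-message network overhead matters so much for this design.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return sum(pool.map(work_unit, chunks))

print(run_job(list(range(10))))   # 285 = 0^2 + 1^2 + ... + 9^2
```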
> You also couldn't use TCP/IP for this model either, because there is no
> "real" machine: its existence is all the parts of the network, not any one
> aspect of it. You don't really want to assign hard drives IP numbers, nor
> would you want to assign them to blocks of RAM or scanners on the network
> when in fact you don't need them..
>
> And yes I realize it is probably more impractical at this stage than
> functional, but it would be cool. =)
>
> Sean
>
>
> >I'm sorry, but I don't understand what you mean about Firewire begging for
> >clustering. It is an interesting technology in concept, but it hasn't been
> >implemented for much of anything but some cameras and maybe a hard drive
> >or two. I know we are nowhere near having Firewire working under Linux.
> >I think you will find that Firewire doesn't have a price/performance ratio
> >that is advantageous over 100Mb Ethernet and later Gb Ethernet. That
> >technology has been implemented, is relatively mature, and works under
> >nearly all OSs.
> >Let's not get confused here: parallel processing works best when each node
> >can be given a chunk of the problem and associated data and then left
> >alone. Every time the cluster has to hit the network for anything more
> >than a small message, we lose performance relative to an SMP machine
> >(internal data busses are always going to be faster than going out to the
> >network). Huge bandwidth isn't really the biggest constraint with
> >clusters. Breaking up the problem correctly is much more important.
> >
> >Just because you use Firewire doesn't mean you aren't going to use TCP/IP.
> >That is up to the drivers and what your basic networking stack can
> >support. I may be wrong on this, but I haven't heard anything about
> >Firewire requiring a different protocol. Seems like it would be a lot of
> >work to make one up too, especially as we have a very nice TCP/IP stack
> >in Linux.
> >
> >Robert G. Werner
> >rwerner@lx1.microbsys.com
> >Impeach Congress!!
> >
> >If the girl you love moves in with another guy once, it's more than enough.
> >Twice, it's much too much. Three times, it's the story of your life.
> >
> >On Thu, 25 Feb 1999, sean o'malley wrote:
> >
> >[snip]
>
>
>
>
>
--------------------------------------------------------------------------
| Troy Benjegerdes | troy@microux.com | hozer@drgw.net |
| Unix is user friendly... You just have to be friendly to it first. |
| This message composed with 100% free software. http://www.gnu.org |
--------------------------------------------------------------------------
* Re: PowerPC Beowulf Who?
1999-02-25 17:05 ` sean o'malley
` (2 preceding siblings ...)
1999-02-26 6:16 ` Troy Benjegerdes
@ 1999-02-26 6:51 ` Cort Dougan
3 siblings, 0 replies; 14+ messages in thread
From: Cort Dougan @ 1999-02-26 6:51 UTC (permalink / raw)
To: sean o'malley; +Cc: Robert G. Werner, linuxppc-dev
What you describe sounds like a combination of Plan 9 and Amoeba.
}Okay, I'm talking about a more sophisticated model than the basic clustering
}scenario you proposed.
* Re: PowerPC Beowulf Who?
1999-02-22 17:35 PowerPC Beowulf Who? Campbell, Marc
1999-02-23 6:14 ` Robert G. Werner
@ 1999-03-02 11:00 ` Martin Konold
1 sibling, 0 replies; 14+ messages in thread
From: Martin Konold @ 1999-03-02 11:00 UTC (permalink / raw)
To: Campbell, Marc; +Cc: extreme-linux, beowulf, linuxppc-dev, linuxppc-user
On Mon, 22 Feb 1999, Campbell, Marc wrote:
> Any suggestions on who to contact for PowerPC & PowerPC/AltiVec Beowulf
> development, integration, sales, and/or support?
IBM recently made me such an offer based on their SP2 switch technology.
Regards,
-- martin
// Martin Konold, Herrenbergerstr. 14, 72070 Tuebingen, Germany //
// Email: konold@kde.org //
Anybody who's comfortable using KDE should use it. Anyone who wants to
tell other people what they should be using can go to work for Microsoft.
end of thread, other threads:[~1999-03-02 11:00 UTC | newest]
Thread overview: 14+ messages
-- links below jump to the message on this page --
1999-02-22 17:35 PowerPC Beowulf Who? Campbell, Marc
1999-02-23 6:14 ` Robert G. Werner
1999-02-23 22:13 ` Campbell, Marc
1999-02-23 23:22 ` Robert G. Werner
1999-02-25 4:06 ` Troy Benjegerdes
1999-02-25 10:28 ` sean o'malley
1999-02-25 19:31 ` Robert G. Werner
1999-02-25 17:05 ` sean o'malley
1999-02-25 22:30 ` Robert G. Werner
1999-02-26 0:35 ` Nathan Hurst
1999-02-26 6:16 ` Troy Benjegerdes
1999-02-26 6:51 ` Cort Dougan
1999-02-26 2:56 ` Douglas Godfrey
1999-03-02 11:00 ` Martin Konold