public inbox for linux-kernel@vger.kernel.org
From: "Aneesh Kumar K.V" <aneesh.kumar@hp.com>
To: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	opendlm-devel@lists.sourceforge.net,
	opengfs-users@lists.sourceforge.net,
	opengfs-devel@lists.sourceforge.net, linux-cluster@redhat.com
Subject: [ANNOUNCE] OpenSSI 1.0.0 released!!
Date: Sat, 31 Jul 2004 16:51:32 +0530	[thread overview]
Message-ID: <410B80BC.4060100@hp.com> (raw)

Hi,

Sorry for the cross-post. I came across this on the OpenSSI website and 
guessed others may also be interested.

-aneesh

The OpenSSI project leverages both HP's NonStop Clusters for UnixWare 
technology and other open source technology to provide a full, highly 
available Single System Image environment for Linux.

Feature list:
1.  Cluster Membership
   * includes libcluster, a library that applications can use
2. Internode Communication

3. Filesystem
    * support for CFS (the Cluster File System) over ext3 and Lustre Lite
    * CFS can be used for the root
    * reopen of files, devices, ipc objects when processes move is supported
    * CFS supports file record locking and shared writable mapped files 
(along with all other standard POSIX capabilities)
    * HA-CFS is configurable for the root or other filesystems
4. Process Management
     * almost all pieces are there, including:
           o clusterwide PIDs
           o process migration and distributed rexec(), rfork() and 
migrate() with reopen of files, sockets, pipes, devices, etc.
           o vprocs
           o clusterwide signalling, get/setpriority
           o capabilities
           o distributed process groups, session, controlling terminal
           o surrogate origin functionality
           o no single points of failure (cleanup code to deal with 
nodedowns)
           o Mosix load leveler (with the process migration model from NSC)
           o clusterwide ptrace() and strace
           o clusterwide /proc/<pid>, ps, top, etc.
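A hedged sketch of what the clusterwide /proc and process tools above mean in 
practice (illustrative commands only; actual output depends on the cluster):

```shell
# Illustrative sketch only: on an OpenSSI node, /proc and the usual
# process tools operate on the whole cluster, not just the local node.

ps -ef        # lists processes running on every node in the cluster
top           # likewise shows clusterwide process activity
ls /proc      # PIDs are unique clusterwide, so every process appears
              # here, whichever node it is actually running on
```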

5. Devices
   * there is a clusterwide device model via the devfs code
   * each node mounts its devfs on /cluster/node#/dev and bind mounts it 
to /dev so all devices are visible and accessible from all nodes, but by 
default you see only local devices
   * a process on any node can open a device on any node
   * devices are reopened when processes move
   * processes retain a context, even if they move; the context 
determines which node's devices are accessed by default
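The per-node device layout described above can be sketched as follows (node 
numbers are hypothetical; the paths follow the announcement):

```shell
# Sketch of the clusterwide device model, assuming two nodes.
# Each node mounts its own devfs under /cluster/node<N>/dev:
ls /cluster/node1/dev     # devices physically attached to node 1
ls /cluster/node2/dev     # devices physically attached to node 2

# /dev is a bind mount of the local node's devfs, so by default a
# process sees only local devices:
ls /dev

# Any node's devices remain reachable through the clusterwide path,
# e.g. a serial port on node 2, opened from any node:
cat /cluster/node2/dev/ttyS0
```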
6. IPC
   * all IPC objects/mechanisms are clusterwide:
          o pipes
          o fifos
          o signalling
          o message queues
          o semaphores
          o shared memory
          o Unix-domain sockets
          o Internet-domain sockets
  * reopen of IPC objects is there for process movement
  * nodedown handling is there for all IPC objects
7. Clusterwide TCP/IP
   * HA-LVS is integrated, with extensions
   * the extension is that port redirection to servers in the cluster is 
automatic and doesn't have to be managed
8. Kernel Data Replication Service
   * it is in there (cluster/ssi/clreg)
9. Shared Storage
   * we have tested shared FCAL and use it for HA-CFS
10. DLM
   * is integrated with CLMS and is HA
11. Sysadmin
   * services architecture has been made clusterwide
12. Init, Booting and Run Levels
   * system runs with a single init which will failover/restart on 
another node if the node it is on dies
13. Application Availability
  * application monitoring/restart provided by spawndaemon/keepalive
  * services started by RC on the initnode will automatically restart on 
a failure of the initnode
14. Timesync
  * NTP for now
15. Load Leveling
  * adapted the openMosix algorithm
  * for connection load balancing, using HA-LVS
  * load leveling is on by default
  * applications must be registered to load level
16. Packaging/Install
   * source patch, binary RPM and CVS source options
   * Debian packages also available via an apt-get repository
   * first node is incremental to a standard Linux install
   * other nodes install via netboot, PXEboot, DHCP and a simple addnode 
command
17. Object Interfaces
   * standard interfaces for objects work as expected
   * no new interfaces for object location or movement except for 
processes (rexec(), migrate(), and /proc/pid/goto to move a process)
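A sketch of the process-movement interface mentioned above. The announcement 
names /proc/pid/goto; the usage shown here assumes (as in OpenSSI's own 
documentation) that writing a node number to that file migrates the process, 
and the PID and node numbers are hypothetical:

```shell
# Hedged sketch: move a process by writing a target node number to
# its /proc goto file (assumed interface; numbers are examples).

# Migrate process 1234 to node 2:
echo 2 > /proc/1234/goto

# Migrate the current shell itself to node 2:
echo 2 > /proc/$$/goto
```

This gives administrators a way to move an unmodified process from the shell, 
while rexec() and migrate() serve programs that manage placement themselves.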


Thread overview: 7+ messages
2004-07-31 11:21 Aneesh Kumar K.V [this message]
2004-07-31 14:40 ` [ANNOUNCE] OpenSSI 1.0.0 released!! Kevin P. Fleming
2004-08-02 19:29   ` Bill Davidsen
2004-07-31 16:35 ` David Weinehall
2004-08-01 17:23 ` Daniel Phillips
     [not found] <2o0e0-6qx-5@gated-at.bofh.it>
     [not found] ` <m37jsk42hw.fsf@averell.firstfloor.org>
2004-08-02  6:30   ` Aneesh Kumar K.V
  -- strict thread matches above, loose matches on Subject: below --
2004-08-05  7:10 Clayton Weaver
