linux-raid.vger.kernel.org archive mirror
* Cluster Aware MD Driver
@ 2007-06-13 19:16 Xinwei Hu
  2007-06-13 19:50 ` Mike Snitzer
  0 siblings, 1 reply; 5+ messages in thread
From: Xinwei Hu @ 2007-06-13 19:16 UTC (permalink / raw)
  To: linux-raid

Hi all,

  Steven Dake proposed a solution* to make the MD layer and tools cluster 
aware in early 2003, but it seems no progress has been made since then. I'd 
like to pick this up again. :)

  As far as I understand, Steven's proposal still mostly applies to the 
current MD implementation, except that we now have the write-intent bitmap, 
which can be worked around via set_bitmap_file.
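
For reference, a rough userspace sketch of that workaround (not mdadm's 
actual code; the device and bitmap paths are only examples, and in practice 
mdadm prepares the bitmap file before handing it to the kernel):

/* Attach an external write-intent bitmap file to a running array via
 * the SET_BITMAP_FILE ioctl.  Error handling is kept minimal. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/major.h>       /* MD_MAJOR */
#include <linux/raid/md_u.h>   /* SET_BITMAP_FILE */

int main(void)
{
    int md_fd = open("/dev/md0", O_RDWR);               /* example array  */
    int bm_fd = open("/mnt/shared/md0.bitmap", O_RDWR); /* example bitmap */

    if (md_fd < 0 || bm_fd < 0) {
        perror("open");
        return 1;
    }

    /* The ioctl argument is the open file descriptor of the bitmap file;
     * the file itself must already contain a valid bitmap superblock. */
    if (ioctl(md_fd, SET_BITMAP_FILE, bm_fd) < 0) {
        perror("SET_BITMAP_FILE");
        return 1;
    }

    close(bm_fd);
    close(md_fd);
    return 0;
}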

   The problem is that it seems we need a kernel<->userspace interface to sync 
the mddev struct across all nodes, but I can't figure out how to do that.
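
Purely as an illustration (the attributes below are just the common per-array 
md sysfs entries; the idea of a userspace daemon shipping their values to 
other nodes is my assumption, not something the kernel offers), one node's 
view of the array state can at least be read back like this:

/* Dump a few per-array md sysfs attributes for /dev/md0.  A cluster
 * daemon could poll or watch these and distribute the values itself;
 * the kernel does no cross-node syncing here. */
#include <stdio.h>

static void dump_attr(const char *name)
{
    char path[128], buf[128];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/block/md0/md/%s", name);
    f = fopen(path, "r");
    if (!f)
        return;
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", name, buf);
    fclose(f);
}

int main(void)
{
    dump_attr("array_state");
    dump_attr("sync_action");
    dump_attr("raid_disks");
    return 0;
}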

   I'm new to the MD driver, so correct me if I'm wrong. Your suggestions 
are really appreciated.

   Thanks.

* http://osdir.com/ml/raid/2003-01/msg00013.html

* RE: Cluster Aware MD Driver
@ 2003-01-07 14:54 Cress, Andrew R
  0 siblings, 0 replies; 5+ messages in thread
From: Cress, Andrew R @ 2003-01-07 14:54 UTC (permalink / raw)
  To: 'Brian Jackson', Steven Dake; +Cc: opengfs-users, linux-raid


I also had a couple of comments below.  My .02.

Andy

-----Original Message-----
From: Brian Jackson
Sent: Saturday, January 04, 2003 4:06 PM
To: Steven Dake
Cc: opengfs-users@lists.sourceforge.net; linux-raid@vger.kernel.org
Subject: Re: Cluster Aware MD Driver

[...]
One thing you might want to think about is that most people who are looking 
at cluster-capable RAID are already going to have some sort of cluster 
management software. It might be useful to use the transports that software 
already provides. Maybe a plugin system that uses what you were saying as the 
default method, but can also load a plugin written to take advantage of an 
existing cluster management system. Just an idea.
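
Something along these lines, purely as a sketch (the struct and all of its 
members are invented here, not an existing interface):

/* Hypothetical operations table the cluster-aware MD code would call
 * through.  A built-in default could implement these directly, while a
 * plugin could map them onto an existing cluster manager's transport. */
struct cluster_transport_ops {
    const char *name;
    int  (*join)(const char *group);         /* join the node group      */
    int  (*send)(const void *msg, int len);  /* broadcast to all members */
    int  (*recv)(void *buf, int len);        /* receive the next message */
    int  (*is_master)(void);                 /* are we the master node?  */
    void (*leave)(void);                     /* leave the group          */
};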

[Andy] I would agree, but state this more strongly: utilizing the existing 
cluster management software would be a requirement for most of these customers. 
Things like heartbeat, network transport, the election process, and 
master/slave relationships would come with that software, and would need to be 
administered from a common interface in the cluster, so using a plugin or 
dynamic library for these functions sounds like a good approach.
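
To make the plugin/dynamic library route concrete, the loading side could 
look roughly like this (the symbol name and the ops struct are carried over 
from the sketch above and are assumptions, not an existing API):

/* Load a transport plugin that wraps whatever cluster management
 * software is already installed; fall back to the built-in default if
 * loading fails.  Link with -ldl. */
#include <dlfcn.h>
#include <stdio.h>

struct cluster_transport_ops;   /* see the sketch earlier in the thread */

typedef const struct cluster_transport_ops *(*get_ops_fn)(void);

const struct cluster_transport_ops *load_transport(const char *so_path)
{
    void *handle = dlopen(so_path, RTLD_NOW);
    get_ops_fn get_ops;

    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return NULL;            /* caller keeps the default ops */
    }

    get_ops = (get_ops_fn)dlsym(handle, "get_cluster_transport_ops");
    if (!get_ops) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return NULL;
    }
    return get_ops();
}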

> 
> The question you might be asking is, how do you protect from each server 
> overwriting similar data such as the superblock or resync data.
[...]
>              The writes will default to on to ensure that non-cluster 
> setups work properly even with autostart.

Maybe the ability to write or not could be an mdadm switch. Something like: 

mdadm -A --non-master 

that would keep the changes to the MD drivers to a minimum (I think, but I 
may be thinking the wrong way), but would require manual intervention if the 
master were to die (or at least some sort of outside intervention). 
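
Purely to illustrate the kind of check such a switch implies on the kernel 
side (every identifier below is invented for the sketch; none of it is 
existing md code):

/* Invented sketch: a per-array flag set at assembly time that keeps
 * non-master nodes from ever writing shared metadata. */
struct md_cluster_state {
    int is_master;  /* set by assembly, or toggled later from outside */
};

static int cluster_may_write_metadata(const struct md_cluster_state *cs)
{
    /* Only the master node updates the superblock or resync
     * bookkeeping; slaves read it but never write. */
    return cs->is_master;
}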

[Andy] The outside intervention would be from the cluster management software, 
since I don't think manual intervention would do. If there were an API to 
toggle this, the cluster management software would be able to change who the 
master was if the master node went down.
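
If that API were exposed as, say, a per-array sysfs attribute (the attribute 
name and the "master" keyword below are invented; no such file exists in the 
md driver), the toggle from the cluster management software could be as 
simple as:

/* Promote this node to master by writing an invented sysfs attribute.
 * Illustration only: the md driver has no such file today. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/block/md0/md/cluster_role", "w");

    if (!f) {
        perror("cluster_role");
        return 1;
    }
    fprintf(f, "master\n");
    fclose(f);
    return 0;
}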

[...]

[parent not found: <200301041016.10224.vittorio.ballestra@infogestnet.it>]

end of thread (newest message: 2007-06-13 19:50 UTC)

Thread overview: 5+ messages
2007-06-13 19:16 Cluster Aware MD Driver Xinwei Hu
2007-06-13 19:50 ` Mike Snitzer
  -- strict thread matches above, loose matches on Subject: below --
2003-01-07 14:54 Cress, Andrew R
     [not found] <200301041016.10224.vittorio.ballestra@infogestnet.it>
     [not found] ` <20030104164314.26621.qmail@escalade.vistahp.com>
2003-01-04 19:13   ` Steven Dake
2003-01-04 21:06     ` Brian Jackson
