cluster-devel.redhat.com archive mirror
* [Cluster-devel] Performance Issue with multiple dataserver
       [not found]                         ` <A1A98FD909DC5248811A65C5D18E9C761DF6A1CC2F@BLRX7MCDC202.AMER.DELL.COM>
@ 2011-05-24 11:44                           ` Steven Whitehouse
  2011-05-24 13:17                             ` J. Bruce Fields
  0 siblings, 1 reply; 2+ messages in thread
From: Steven Whitehouse @ 2011-05-24 11:44 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

On Tue, 2011-05-24 at 17:09 +0530, Taousif_Ansari at DELLTEAM.com wrote:
> Hi Bruce, Shyam
> 
>  As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects, GFS2 also has issues (crashes, performance), so instead of going for GFS2 can we debug spNFS itself to get high performance?
> 
> 
> -Taousif
> 
As far as I'm aware that is historical information. If there are still
problems with GFS2, please report them so we can work on them.

Steve.

> -----Original Message-----
> From: Iyer, Shyam 
> Sent: Thursday, May 19, 2011 8:08 PM
> To: Ansari, Taousif - Dell Team; bfields at fieldses.org
> Cc: linux-nfs at vger.kernel.org
> Subject: RE: Performance Issue with multiple dataserver
> 
> 
> 
> > -----Original Message-----
> > From: linux-nfs-owner at vger.kernel.org [mailto:linux-nfs-
> > owner at vger.kernel.org] On Behalf Of Ansari, Taousif - Dell Team
> > 
> > Can you please elaborate on the GFS2 setup a bit more?
> 
> 
> I guess Bruce is saying the step-by-step procedure is not written up...
> 
> Create a Red Hat cluster using the shared block storage (iSCSI in your case, I guess). You'll find documentation on creating a RH cluster in many places.
> 
> All the MDSs and the DSs need to be part of the cluster.
> 
> Format GFS2 on the shared iSCSI storage.
> 
> Mount the GFS2 formatted iSCSI storage on all the MDSs and DSs and export them via NFS. Use Benny's tree for NFS.
> 
> The GFS2 cluster backend is your glue to scale the MDSes and DSes.
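A minimal sketch of those steps, assuming the cluster is already up, is shown below. The cluster name `mycluster`, filesystem name `pnfsdata`, device `/dev/sdb`, mount point `/mnt/pnfsdata`, and journal count are all placeholders; adjust them to your environment.

```shell
# Run once, from any node: format the shared iSCSI device as GFS2.
#   -p lock_dlm           cluster-wide locking via DLM
#   -t mycluster:pnfsdata lock table name; the first part must match
#                         the cluster name in cluster.conf
#   -j 3                  one journal per node that will mount it
mkfs.gfs2 -p lock_dlm -t mycluster:pnfsdata -j 3 /dev/sdb

# On every MDS and DS: mount the filesystem at the same path.
mkdir -p /mnt/pnfsdata
mount -t gfs2 /dev/sdb /mnt/pnfsdata

# On every MDS and DS: export it over NFS (from the pNFS-enabled
# kernel tree). fsid=0 marks this as the NFSv4 pseudo-root here.
echo '/mnt/pnfsdata *(rw,sync,fsid=0)' >> /etc/exports
exportfs -ra
```

This is a command/config sketch, not a tested recipe; it only illustrates the ordering of the steps described above.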
> 
> > 
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:bfields at fieldses.org]
> > Sent: Thursday, May 19, 2011 7:14 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs at vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> > 
> > On Thu, May 19, 2011 at 06:44:59PM +0530, Taousif_Ansari at DELLTEAM.com
> > wrote:
> > > Then what should I follow, and what details are needed?
> > 
> > There isn't really any supported server-side pNFS.
> > 
> > The closest is the GFS2-based code, for which you need to install
> > Benny's latest tree, configure a shared block device, create a GFS2
> > filesystem on it, mount it across all DS's and the MDS, and export it
> > from all of them--but I don't believe anyone has written step-by-step
> > instructions for that.
> > 
> > --b.
> > 
> > >
> > > -----Original Message-----
> > > From: linux-nfs-owner at vger.kernel.org [mailto:linux-nfs-
> > owner at vger.kernel.org] On Behalf Of J. Bruce Fields
> > > Sent: Thursday, May 19, 2011 6:43 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: linux-nfs at vger.kernel.org
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari at DELLTEAM.com
> > wrote:
> > > > I have followed the way given on http://wiki.linux-
> > nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> > >
> > > Oh.  As noted there, spnfs is unmaintained.
> > >
> > > And, in any case, we'd need many more details about your setup.
> > >
> > > --b.
> > >
> > > >
> > > > -Taousif
> > > >
> > > > -----Original Message-----
> > > > From: J. Bruce Fields [mailto:bfields at fieldses.org]
> > > > Sent: Thursday, May 19, 2011 5:20 PM
> > > > To: Ansari, Taousif - Dell Team
> > > > Cc: linux-nfs at vger.kernel.org
> > > > Subject: Re: Performance Issue with multiple dataserver
> > > >
> > > > On Thu, May 19, 2011 at 10:56:44AM +0530,
> > Taousif_Ansari at DELLTEAM.com wrote:
> > > > > Hi,
> > > > >
> > > > > On the server I am using linux-pnfs-2.6.38 (linux-pnfs-ae7441f.tar)
> > > > > and on the client also linux-pnfs-2.6.38, downloaded from
> > > > > http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary, on
> > > > > Fedora 14.
> > > >
> > > > So you're using GFS2 on the server?  With what sort of storage?
> > > >
> > > > --b.
> > > >
> > > > >
> > > > > Extremely sorry for causing confusion.
> > > > > -----Original Message-----
> > > > > From: J. Bruce Fields [mailto:bfields at fieldses.org]
> > > > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > > > To: Ansari, Taousif - Dell Team
> > > > > Cc: linux-nfs at vger.kernel.org
> > > > > Subject: Re: Performance Issue with multiple dataserver
> > > > >
> > > > > You sent this message as a reply to an unrelated message, which is
> > > > > confusing to those of us with threaded mail readers.
> > > > >
> > > > > On Wed, May 18, 2011 at 05:24:45PM +0530,
> > Taousif_Ansari at DELLTEAM.com wrote:
> > > > > > I have done a pNFS setup with a single dataserver and with two
> > > > > > dataservers and ran the IOzone tool on both. I found that the
> > > > > > performance with multiple dataservers is lower than with a
> > > > > > single dataserver.
> > > > >
> > > > > What are you using as the server, and what as the client?
> > > > >
> > > > > --b.
> > > > >
> > > > > >
> > > > > > Here are some numbers, which were captured by the IOzone tool.
> > > > > >
> > > > > >
> > > > > > Record length (KB):       4      8     16     32     64    128    256    512   1024
> > > > > >
> > > > > > With single dataserver (1 MB file), kB/sec:
> > > > > > Read:                 66415  66359  63630  70358  86223  70256  66047  66068  68489
> > > > > > Write:                18827  16920  18846  17039  18896  17009  17173  19206  17947
> > > > > >
> > > > > > With two dataservers (1 MB file), kB/sec:
> > > > > > Read:                 36882 381198  38150  38084  38749  33663  34398  37313  37847
> > > > > > Write:                 5461   4661   5586   4870   5227   4922   4214   5572   4658
> > > > > >
> > > > > >
> > > > > > Can somebody tell me what could be the issue?
> > > > > > --
> > > > > > To unsubscribe from this list: send the line "unsubscribe
> > linux-nfs" in
> > > > > > the body of a message to majordomo at vger.kernel.org
> > > > > > More majordomo info at  http://vger.kernel.org/majordomo-
> > info.html




^ permalink raw reply	[flat|nested] 2+ messages in thread

* [Cluster-devel] Performance Issue with multiple dataserver
  2011-05-24 11:44                           ` [Cluster-devel] Performance Issue with multiple dataserver Steven Whitehouse
@ 2011-05-24 13:17                             ` J. Bruce Fields
  0 siblings, 0 replies; 2+ messages in thread
From: J. Bruce Fields @ 2011-05-24 13:17 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Tue, May 24, 2011 at 12:44:19PM +0100, Steven Whitehouse wrote:
> Hi,
> 
> On Tue, 2011-05-24 at 17:09 +0530, Taousif_Ansari at DELLTEAM.com wrote:
> > Hi Bruce, Shyam
> > 
> >  As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects, GFS2 also has issues (crashes, performance), so instead of going for GFS2 can we debug spNFS itself to get high performance?
> > 
> > 
> > -Taousif
> > 
> As far as I'm aware that is historical information. If there are still
> problems with GFS2, please report them so we can work on them.

Well, they may be nfs problems rather than gfs2 problems.

In either case, neither pnfs/gfs2 nor spnfs is a particularly mature
project; you will find bugs and performance problems in both.

I think a cluster-filesystem-based approach probably has the better
chance of getting merged earlier, as it solves a number of thorny
problems (such as how to do IO through the MDS) for you.  But it all
depends on what your goals are.  Either will require significant
development work to get into acceptable shape.

--b.




end of thread, other threads:[~2011-05-24 13:17 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <4DD2A084.1040905@moving-picture.com>
     [not found] ` <20110518080106.1159c5b8@notabene.brown>
     [not found]   ` <4DD39D39.7010805@moving-picture.com>
     [not found]     ` <A1A98FD909DC5248811A65C5D18E9C761DF69B1F18@BLRX7MCDC202.AMER.DELL.COM>
     [not found]       ` <20110518161239.GA16835@fieldses.org>
     [not found]         ` <A1A98FD909DC5248811A65C5D18E9C761DF69B2188@BLRX7MCDC202.AMER.DELL.COM>
     [not found]           ` <20110519115018.GA31978@fieldses.org>
     [not found]             ` <A1A98FD909DC5248811A65C5D18E9C761DF69B2432@BLRX7MCDC202.AMER.DELL.COM>
     [not found]               ` <20110519131236.GA32648@fieldses.org>
     [not found]                 ` <A1A98FD909DC5248811A65C5D18E9C761DF69B244C@BLRX7MCDC202.AMER.DELL.COM>
     [not found]                   ` <20110519134356.GB32648@fieldses.org>
     [not found]                     ` <A1A98FD909DC5248811A65C5D18E9C761DF69B247B@BLRX7MCDC202.AMER.DELL.COM>
     [not found]                       ` <DBFB1B45AF80394ABD1C807E9F28D157032A7822A8@BLRX7MCDC203.AMER.DELL.COM>
     [not found]                         ` <A1A98FD909DC5248811A65C5D18E9C761DF6A1CC2F@BLRX7MCDC202.AMER.DELL.COM>
2011-05-24 11:44                           ` [Cluster-devel] Performance Issue with multiple dataserver Steven Whitehouse
2011-05-24 13:17                             ` J. Bruce Fields
