* elevator question
From: Marko Weber|8000 @ 2014-03-17 15:35 UTC
To: Xfs

Hello list,

in the XFS FAQ I read that elevator=noop is best when using an SSD. I have
the Gentoo system on an SSD, but the larger data storage partition is on
software RAID with some SATA disks. Is elevator=noop still best in this
combination?

Thanks for any constructive answer,

Marko

--
zbfmail - Mittendrin statt nur Datei!

OpenDKIM, SPF, DSPAM, Greylisting, POSTSCREEN, AMAVIS, Mailgateways
Mailfiltering, SMTP Service, Spam Abwehr, MX-Backup, Mailserver Backup
Redundante Mailgateways, HA Mailserver, Secure Mailserver

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: elevator question 2014-03-17 15:35 elevator question Marko Weber|8000 @ 2014-03-17 15:39 ` Grozdan 2014-03-17 17:25 ` Marko Weber|8000 2014-03-17 22:51 ` Stan Hoeppner 1 sibling, 1 reply; 9+ messages in thread From: Grozdan @ 2014-03-17 15:39 UTC (permalink / raw) To: weber; +Cc: Xfs On Mon, Mar 17, 2014 at 4:35 PM, Marko Weber|8000 <weber@zbfmail.de> wrote: > > hello list, > in the xfs faq i read elevator=noop is best when using ssd. > i have the gentoo system on a ssd, but the larger data storage partition on > softraid with some sata disks. > is elevator=noop in this combo still best? > > thx for any cunstructive answer > > marko noop and deadline are best for SSDs. deadline is best for spinning disks with XFS, especially in RAID. Stay away from CFQ as it kills parallelism in XFS > > > > -- > zbfmail - Mittendrin statt nur Datei! > > OpenDKIM, SPF, DSPAM, Greylisting, POSTSCREEN, AMAVIS, Mailgateways > Mailfiltering, SMTP Service, Spam Abwehr, MX-Backup, Mailserver Backup > Redundante Mailgateways, HA Mailserver, Secure Mailserver > > _______________________________________________ > xfs mailing list > xfs@oss.sgi.com > http://oss.sgi.com/mailman/listinfo/xfs -- Yours truly _______________________________________________ xfs mailing list xfs@oss.sgi.com http://oss.sgi.com/mailman/listinfo/xfs ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: elevator question
From: Marko Weber|8000 @ 2014-03-17 17:25 UTC
To: Xfs

On 2014-03-17 16:39, Grozdan wrote:
> noop and deadline are best for SSDs. deadline is best for spinning disks
> with XFS, especially in RAID. Stay away from CFQ, as it kills
> parallelism in XFS.

Thanks Grozdan, but what if I have a mixed setup in one server, SSD + SATA?

Marko
* RE: elevator question
From: Shaun Gosse @ 2014-03-17 17:38 UTC
To: weber@zbfmail.de, Xfs

Marko,

I haven't done this myself, so use at your own risk, but the Arch Linux
wiki has an example of using udev to set one scheduler for non-rotational
disks and another for rotational ones, which sounds like a good general
solution for what you're looking for here:

https://wiki.archlinux.org/index.php/Solid_State_Drives#Using_udev_for_one_device_or_HDD.2FSSD_mixed_environment

Cheers,
-Shaun
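[The rule the Arch wiki describes boils down to matching on the kernel's
rotational flag. A minimal sketch of such a udev rules file follows; the
file name and the noop/deadline choices are illustrative, not copied from
the wiki page:]

```
# /etc/udev/rules.d/60-schedulers.rules (file name illustrative)
# Non-rotational devices (SSDs): use the noop elevator.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
# Rotational devices (spinning disks): use deadline.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="deadline"
```

[udev applies these as devices appear, so a mixed SSD + SATA box gets the
right scheduler per disk without hard-coding device names.]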
* Re: elevator question
From: Emmanuel Florac @ 2014-03-17 17:45 UTC
To: weber; +Cc: Xfs

On Mon, 17 Mar 2014 18:25:32 +0100, Marko Weber|8000 <weber@zbfmail.de> wrote:

> Thanks Grozdan, but what if I have a mixed setup in one server, SSD + SATA?

Very easy: you can set the I/O scheduler for each device separately, at will:

echo 'noop' > /sys/block/sda/queue/scheduler
echo 'deadline' > /sys/block/sdb/queue/scheduler

You can put that in some startup script like /etc/rc.local.

--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique | Intellique
<eflorac@intellique.com> | +33 1 78 94 84 02
------------------------------------------------------------------------
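[The two echo lines above hard-code the device names. For a startup
script, the same decision can be driven by the kernel's rotational flag;
a small sketch follows. The helper name pick_scheduler is made up here,
and the sysfs root is parameterized only so the logic can be exercised
without real devices:]

```shell
#!/bin/sh
# pick_scheduler DEV [SYSROOT] - print a suggested scheduler for block
# device DEV, based on the kernel's rotational flag:
#   0 (non-rotational, SSD)  -> noop
#   1 (rotational, spinning) -> deadline
# SYSROOT defaults to /sys.
pick_scheduler() {
    dev=$1
    sysroot=${2:-/sys}
    if [ "$(cat "$sysroot/block/$dev/queue/rotational")" = "0" ]; then
        echo noop
    else
        echo deadline
    fi
}

# Example use, e.g. from /etc/rc.local (writes to sysfs, so run as root):
#   for d in sda sdb; do
#       pick_scheduler "$d" > "/sys/block/$d/queue/scheduler"
#   done
```

[This avoids the problem of device names shifting between boots, since
each disk is classified by its own rotational attribute.]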
* Re: elevator question
From: Stan Hoeppner @ 2014-03-17 22:51 UTC
To: weber, Xfs

On 3/17/2014 10:35 AM, Marko Weber|8000 wrote:
> in the XFS FAQ I read that elevator=noop is best when using an SSD.
> I have the Gentoo system on an SSD, but the larger data storage
> partition is on software RAID with some SATA disks.
> Is elevator=noop still best in this combination?

The elevator is per physical device. For the SSD use noop. But with your
setup it doesn't matter, as md/RAID will run at rust speed.

A better solution is to remove the SSD partition from the md/RAID array,
then reshape/shrink/rebuild the array, whatever is necessary to make it
happy after removing the SSD partition. Now configure the SSD partition
as bcache and enable the writeback policy. This will raise the
performance of the rust array to SSD level for writes, and for reads once
frequently accessed data is cached on your SSD.

--
Stan
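[The bcache setup suggested here can be sketched as the usual
bcache-tools command sequence. The device names below are examples only,
and make-bcache writes superblocks, i.e. it destroys data on the named
devices; this illustrates the procedure, it is not something to paste
as-is:]

```shell
# Format the freed SSD partition as a caching device and the md array
# as a backing device (make-bcache is from the bcache-tools package):
make-bcache -C /dev/sda3        # example SSD partition
make-bcache -B /dev/md0         # example md/RAID array

# Register the devices if they were not auto-detected:
echo /dev/sda3 > /sys/fs/bcache/register
echo /dev/md0  > /sys/fs/bcache/register

# Attach the cache set to the backing device (UUID printed by the -C step):
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Enable the writeback policy:
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

[The filesystem then lives on /dev/bcache0 rather than /dev/md0 directly.]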
* Re: elevator question
From: Emmanuel Florac @ 2014-03-18 9:26 UTC
To: stan; +Cc: Xfs, weber

On Mon, 17 Mar 2014 17:51:02 -0500, you wrote:

> Now configure the SSD partition as bcache and enable the writeback
> policy. This will raise the performance of the rust array to SSD level
> for writes, and for reads once frequently accessed data is cached on
> your SSD.

+1 :) You may also try EnhanceIO, flashcache...

--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique | Intellique
<eflorac@intellique.com> | +33 1 78 94 84 02
------------------------------------------------------------------------
* Re: elevator question
From: Marko Weber|8000 @ 2014-03-18 10:18 UTC
To: Xfs

Hi,

On 2014-03-18 10:26, Emmanuel Florac wrote:
> +1 :) You may also try EnhanceIO, flashcache...

I am fairly new to this topic. What would you XFS guys recommend?
I prefer things I can do with the kernel, so is it dm-cache? bcache?
The RAID is a RAID5 with 3 disks at the moment, with an XFS filesystem.
Do I have to format it again and set anything special? I use nobarrier
at the moment. Any hints, links, guides, or even tips from you directly
would be great.

Thanks,
Marko
* Re: elevator question
From: Emmanuel Florac @ 2014-03-18 13:56 UTC
To: weber; +Cc: Xfs

On Tue, 18 Mar 2014 11:18:04 +0100, Marko Weber|8000 <weber@zbfmail.de> wrote:

> I am fairly new to this topic. What would you XFS guys recommend?
> I prefer things I can do with the kernel, so is it dm-cache? bcache?

The easiest will be bcache, because it comes included with the kernel.

> The RAID is a RAID5 with 3 disks at the moment, with an XFS filesystem.
> Do I have to format it again and set anything special?

No, only the cache itself must be formatted.

> I use nobarrier at the moment.

If you're not using a hardware RAID controller with a battery-backed or
flash-based cache, that may be risky. It depends mostly on your write
activity and power supply stability, though.

--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique | Intellique
<eflorac@intellique.com> | +33 1 78 94 84 02
------------------------------------------------------------------------