public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* Fragmentation Issue We Are Having
@ 2012-04-12  1:04 David Fuller
  2012-04-12  2:16 ` Dave Chinner
  2012-04-12  7:57 ` Brian Candler
  0 siblings, 2 replies; 15+ messages in thread
From: David Fuller @ 2012-04-12  1:04 UTC (permalink / raw)
  To: xfs



We seem to be having an issue whereby our database server
gets to 90% or higher fragmentation.  When it reaches this point
we need to remove it from production and defragment it using the
xfs_fsr tool.  The server does get a lot of reads and writes.  Is
there something we can do to reduce the fragmentation, or could
this be a result of the hard disk tweaks we use or our mount options?

Here are some of the tweaks we do:

/bin/echo "512" > /sys/block/sda/queue/read_ahead_kb
/bin/echo "10000" > /sys/block/sda/queue/nr_requests
/bin/echo "512" > /sys/block/sdb/queue/read_ahead_kb
/bin/echo "10000" > /sys/block/sdb/queue/nr_requests
/bin/echo "noop" > /sys/block/sda/queue/scheduler
/bin/echo "noop" > /sys/block/sdb/queue/scheduler


And here are the mount options on one of our servers:

 xfs     rw,noikeep,allocsize=256M,logbufs=8,sunit=128,swidth=2304

The sunit and swidth vary on each server based on the disk drives.
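For readers decoding those numbers: mount(8) specifies sunit and swidth in 512-byte sectors, so the geometry above works out as follows (a sketch; the disk count is inferred from the ratio, not stated in the thread):

```shell
# sunit/swidth mount options are given in 512-byte sectors
sunit=128    # from the mount line above
swidth=2304
echo "stripe unit:  $(( sunit  * 512 / 1024 )) KiB"   # 64 KiB per disk
echo "stripe width: $(( swidth * 512 / 1024 )) KiB"   # 1152 KiB full stripe
echo "data disks:   $(( swidth / sunit ))"            # swidth/sunit = 18
```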

We do use LVM on the volume where the MySQL data is stored,
as we need it for snapshotting.  Here is an example of the current state:

xfs_db -c frag -r /dev/mapper/vgmysql-lvmysql
actual 42586, ideal 3134, fragmentation factor 92.64%
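
The percentage xfs_db reports follows directly from the two extent counts on that line; the arithmetic can be reproduced like so (a sketch, using awk for the floating-point division):

```shell
actual=42586   # extents currently used
ideal=3134     # extents if every file were contiguous
# fragmentation factor = (1 - ideal/actual) * 100
awk -v a="$actual" -v i="$ideal" \
    'BEGIN { printf "fragmentation factor %.2f%%\n", (1 - i/a) * 100 }'
# prints: fragmentation factor 92.64%
```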



Regards,
David Fuller


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: Fragmentation Issue We Are Having
@ 2012-04-19 19:54 Richard Scobie
  0 siblings, 0 replies; 15+ messages in thread
From: Richard Scobie @ 2012-04-19 19:54 UTC (permalink / raw)
  To: xfs; +Cc: b.candler

Brian Candler wrote:

--------------------------

Ah, that's new to me. So with inode32 and
   sysctl fs.xfs.rotorstep=255
you can get roughly the same locality benefit for sequentially-written files
as inode64?  (Aside: if you have two processes writing files to two
different directories, will they end up mixing their files in the same AG?
That could hurt performance at readback time if reading them sequentially.)

-----------------------------

The "filestreams" mount option may be of use here, see:

http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/ch06s16.html

and page 17 of:

http://oss.sgi.com/projects/xfs/training/xfs_slides_06_allocators.pdf
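
To make the two allocator strategies discussed in this thread concrete, they might be applied roughly as below. This is only a sketch: the device path and mount point are placeholders taken from David's earlier message, and the rotorstep value is the one Brian quoted.

```shell
# Option 1: filestreams allocator - each directory's files are kept
# in their own allocation group while the directory is active
mount -o filestreams /dev/mapper/vgmysql-lvmysql /mnt/mysql

# Option 2: inode32 allocator with a large rotorstep, so many
# consecutively created files land in one AG before rotating
mount -o inode32 /dev/mapper/vgmysql-lvmysql /mnt/mysql
sysctl fs.xfs.rotorstep=255
```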

Regards,

Richard


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2012-04-19 23:13 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-04-12  1:04 Fragmentation Issue We Are Having David Fuller
2012-04-12  2:16 ` Dave Chinner
2012-04-12  2:55   ` David Fuller
2012-04-12  4:24     ` Eric Sandeen
2012-04-12  7:57 ` Brian Candler
2012-04-13  0:09   ` David Fuller
2012-04-13  7:19     ` Brian Candler
2012-04-13  7:56       ` Dave Chinner
2012-04-13  8:17         ` Brian Candler
2012-04-17  0:26           ` Dave Chinner
2012-04-17  8:58             ` Brian Candler
2012-04-18  1:36               ` Dave Chinner
2012-04-18  9:00                 ` Brian Candler
2012-04-19 23:12                   ` Dave Chinner
  -- strict thread matches above, loose matches on Subject: below --
2012-04-19 19:54 Richard Scobie

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox