From: Jens Axboe <axboe@suse.de>
To: "Jeff V. Merkey" <jmerkey@wolfmountaingroup.com>
Cc: "Jeff V. Merkey" <jmerkey@soleranetworks.com>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: 2.6.9 reporting 1 Gigabyte/second throughput on bio's, timer skew possible?
Date: Sun, 13 Nov 2005 20:36:25 +0100
Message-ID: <20051113193625.GI3699@suse.de>
In-Reply-To: <4375C916.8020804@wolfmountaingroup.com>
On Sat, Nov 12 2005, Jeff V. Merkey wrote:
> Jens Axboe wrote:
>
> >On Fri, Nov 11 2005, Jeff V. Merkey wrote:
> >
> >
> >>I have allocated 393,216 bio buffers I statically maintain in a chain
> >>and am running the dsfs file system with 3 x gigabit links fully
> >>saturated. Meta-data increases the write sizes to 720 MB/second on
> >>dual 9500 controllers with 8 drives each (a total of 16 7200 RPM
> >>drives). I am seeing some congestion and bursting on the bio chains
> >>as they are submitted.
> >>
>
>
> >16 disks on 2 controllers, I'm 100% sure there are lots of people
> >pushing 2.6 much further than that! I wouldn't even call that a big
> >setup.
> >
> >
> Probably not for this type of application.
>
> >
> >
> >>DSFS dynamically generates html status files from within the file
> >>system. When the system gets somewhat behind, I am seeing bursts > 1
> >>GB/Second which exceeds the theoretical limit of the bus. I have a
> >>timer function that runs every second and profiles the I/O throughput
> >>created by DSFS with bio submissions and captured packets. I am asking
> >>if there is clock skew at these data rates with use of the timer
> >>functions. The system appears to be sustaining 1GB/Second throughput on
> >>dual controllers. I have verified through data rates the system is
> >>sustaining 800 megabytes/second with these 1GB/S bursts. I am curious
> >>if there is potentially timer skew at these higher rates since I am
> >>having a hard time accepting that I can push 1GB/S through a bus rated
> >>at only 850 MB/S for DMA based transfers. The unit is accessible by
> >>
> >>
> >
> >Note that the linux io stats accounting in 2.6.9 accounts queued io, not
> >io completions. So it's quite possible to have burst rates > bus speeds
> >for async io. 2.6.15-rc1 changes this.
> >
> >
> >
> So you are willing to log into the unit and validate these numbers? I
> would like someone other than me to validate that I am seeing these
> rates.
If you average the bandwidth over a time long enough to eliminate the
bursty queueing rates, your average rate should drop to what the
hardware can actually do. Or dig out the patch from 2.6.15-rc1 for
ll_rw_blk.c and apply it to 2.6.9; you can find it here:
http://kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=d72d904a5367ad4ca3f2c9a2ce8c3a68f0b28bf0;hp=d83c671fb7023f69a9582e622d01525054f23b66
--
Jens Axboe
Thread overview: 5+ messages (newest: 2005-11-13 19:35 UTC)
2005-11-11 22:58 2.6.9 reporting 1 Gigabyte/second throughput on bio's, timer skew possible? Jeff V. Merkey
2005-11-12 9:51 ` Jens Axboe
2005-11-12 10:51 ` Jeff V. Merkey
2005-11-13 19:36 ` Jens Axboe [this message]
2005-11-13 21:00 ` jmerkey