From: Patrick Mansfield <patmans@us.ibm.com>
To: linux-scsi@vger.kernel.org
Subject: [PATCH] 0/7 per scsi_device queue lock patches
Date: Mon, 24 Mar 2003 17:53:37 -0800 [thread overview]
Message-ID: <20030324175337.A14957@beaverton.ibm.com> (raw)
The following patches against recent 2.5.x bk (pulled on March 24; they
should apply fine on top of 2.5.66) split the scsi queue_lock into a
per-scsi_device lock rather than the current per-scsi_host lock.
The first 2 patches were already posted and discussed, and include changes
based on that discussion.
Hopefully there are no major issues with the first 4 patches - I would
like to have them pushed for inclusion in 2.5.
The patches are incremental (each one applies on top of the previous one).
Patch descriptions:
patch 1: starved changes - use a list_head for starved queue's
patch 2: add missing scsi_queue_next_request calls
patch 3: consolidate single_lun code (this code goes away with patch 7)
patch 4: cleanup/consolidate code in scsi_request_fn
patch 5: alloc a request_queue on each scsi_alloc_sdev call
patch 6: add and use a per-scsi_device queue_lock
patch 7: fix single_lun code for per-scsi_device queue_lock.
If you run with all patches applied let me know your results.
I've run tests across 20 drives and 2 qla 2300 adapters on an 8 CPU NUMAQ
system, using the feral driver with can_queue set to 50 and queue_depth
set to 16.
I ran with 20 processes; each process continuously re-reads the first
block of a different disk (using O_DIRECT). It's not much of a benchmark.
With all the patches applied (20 processes each reading 20000 times), I got:
1.03user 81.91system 0:28.46elapsed
Without patches:
1.09user 153.36system 0:34.61elapsed
If anyone wants, I can get vmstat or oprofile before/after results.
I also ran some write-fsync tests (on file systems mounted across 20
drives), but hitting can_queue without the starved changes causes
variation in the performance numbers, and I'm seeing I/O hangs (with and
without the above patches, though more often without them). I haven't
figured out what is wrong. I'm also seeing occasional hangs on boot (with
or without the patches; NUMAQ + feral + isp 1020 + qla 2300). I have not
had any problems on a netfinity (more standard x86) box with an aic
adapter.
Thanks.
-- Patrick Mansfield
Thread overview: 34+ messages
2003-03-25 1:53 Patrick Mansfield [this message]
2003-03-25 1:54 ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Patrick Mansfield
2003-03-25 2:02 ` [PATCH] 2/7 add missing scsi_queue_next_request calls Patrick Mansfield
2003-03-25 2:02 ` [PATCH] 3/7 consolidate single_lun code Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 6/7 add and use a per-scsi_device queue_lock Patrick Mansfield
2003-03-25 2:04 ` [PATCH] 7/7 fix single_lun code for " Patrick Mansfield
2003-03-25 21:23 ` Luben Tuikov
2003-03-26 21:47 ` Patrick Mansfield
2003-03-26 22:12 ` Luben Tuikov
2003-03-25 21:03 ` [PATCH] 6/7 add and use a " Luben Tuikov
2003-03-26 21:33 ` Patrick Mansfield
2003-03-25 21:20 ` James Bottomley
2003-03-26 2:01 ` Patrick Mansfield
2003-03-27 16:09 ` James Bottomley
2003-03-28 0:30 ` Patrick Mansfield
2003-03-25 7:12 ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Christoph Hellwig
2003-03-25 7:18 ` Jens Axboe
2003-03-25 21:32 ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Luben Tuikov
2003-03-26 0:58 ` Patrick Mansfield
2003-03-26 17:07 ` Luben Tuikov
2003-03-26 17:13 ` Patrick Mansfield
2003-03-26 17:25 ` Luben Tuikov
2003-03-25 20:36 ` [PATCH] 3/7 consolidate single_lun code Luben Tuikov
2003-03-26 19:11 ` Patrick Mansfield
2003-03-26 22:05 ` Luben Tuikov
2003-03-27 22:43 ` Patrick Mansfield
2003-03-28 15:09 ` Luben Tuikov
2003-03-28 20:06 ` Patrick Mansfield
2003-03-25 20:50 ` Luben Tuikov
2003-03-25 19:41 ` [PATCH] 2/7 add missing scsi_queue_next_request calls Luben Tuikov
2003-03-25 19:39 ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Luben Tuikov
2003-03-27 16:14 ` [PATCH] 0/7 per scsi_device queue lock patches James Bottomley