From: green@linuxhacker.ru
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
devel@driverdev.osuosl.org,
Andreas Dilger <andreas.dilger@intel.com>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Bobi Jam <bobijam.xu@intel.com>,
Bob Glossman <bob.glossman@intel.com>,
Oleg Drokin <oleg.drokin@intel.com>
Subject: [PATCH 01/10] staging/lustre/osc: shorten IO calling path
Date: Wed, 25 Mar 2015 21:53:17 -0400
Message-ID: <1427334806-31466-2-git-send-email-green@linuxhacker.ru>
In-Reply-To: <1427334806-31466-1-git-send-email-green@linuxhacker.ru>
From: Bobi Jam <bobijam.xu@intel.com>
Use osc_io_unplug_async() in osc_queue_sync_pages() to shorten
the IO calling path and reduce the chance of stack overflow.
Signed-off-by: Bobi Jam <bobijam.xu@intel.com>
Signed-off-by: Bob Glossman <bob.glossman@intel.com>
Reviewed-on: http://review.whamcloud.com/11612
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-3188
Reviewed-by: Jinshan Xiong <jinshan.xiong@intel.com>
Reviewed-by: Niu Yawei <yawei.niu@intel.com>
Signed-off-by: Oleg Drokin <oleg.drokin@intel.com>
---
drivers/staging/lustre/lustre/osc/osc_cache.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/lustre/lustre/osc/osc_cache.c b/drivers/staging/lustre/lustre/osc/osc_cache.c
index 7022ed4..d44b3d4 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cache.c
+++ b/drivers/staging/lustre/lustre/osc/osc_cache.c
@@ -2613,7 +2613,7 @@ int osc_queue_sync_pages(const struct lu_env *env, struct osc_object *obj,
 	}
 	osc_object_unlock(obj);
 
-	osc_io_unplug(env, cli, obj, PDL_POLICY_ROUND);
+	osc_io_unplug_async(env, cli, obj);
 	return 0;
 }
 
--
2.1.0
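For context on the change above: osc_io_unplug() issues the queued RPCs directly on the caller's stack, while osc_io_unplug_async() only queues the work and lets a daemon thread issue the RPCs from its own, shallow stack. What follows is a minimal user-space sketch of that hand-off pattern, not Lustre code; apart from the two osc_io_unplug variants named above, every identifier in it is made up for illustration.

/*
 * Sketch only: defer I/O submission to a daemon thread so it runs on a
 * fresh, shallow stack instead of the caller's already deep one.
 * Not Lustre code; all names here are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct io_request {
	int id;
	struct io_request *next;
};

static struct io_request *queue_head;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;
static int shutting_down;

/* Stand-in for the deep call chain that actually issues the RPC. */
static void issue_rpc(struct io_request *req)
{
	printf("request %d issued on thread %lu\n",
	       req->id, (unsigned long)pthread_self());
	free(req);
}

/* "Synchronous unplug": the RPC is issued right here, on the caller's stack. */
static void io_unplug(struct io_request *req)
{
	issue_rpc(req);
}

/* "Asynchronous unplug": only queue the request and wake the daemon thread. */
static void io_unplug_async(struct io_request *req)
{
	pthread_mutex_lock(&queue_lock);
	req->next = queue_head;
	queue_head = req;
	pthread_cond_signal(&queue_cond);
	pthread_mutex_unlock(&queue_lock);
}

/* Daemon thread: drains the queue and issues RPCs from its own stack. */
static void *io_daemon(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&queue_lock);
	for (;;) {
		while (queue_head) {
			struct io_request *req = queue_head;

			queue_head = req->next;
			pthread_mutex_unlock(&queue_lock);
			issue_rpc(req);
			pthread_mutex_lock(&queue_lock);
		}
		if (shutting_down)
			break;
		pthread_cond_wait(&queue_cond, &queue_lock);
	}
	pthread_mutex_unlock(&queue_lock);
	return NULL;
}

int main(void)
{
	pthread_t daemon_thread;
	struct io_request *sync_req = malloc(sizeof(*sync_req));
	struct io_request *async_req = malloc(sizeof(*async_req));

	sync_req->id = 1;
	async_req->id = 2;
	pthread_create(&daemon_thread, NULL, io_daemon, NULL);

	io_unplug(sync_req);        /* old path: work done on this stack */
	io_unplug_async(async_req); /* new path: hand off to the daemon */

	pthread_mutex_lock(&queue_lock);
	shutting_down = 1;
	pthread_cond_signal(&queue_cond);
	pthread_mutex_unlock(&queue_lock);
	pthread_join(daemon_thread, NULL);
	return 0;
}

The hand-off is what keeps osc_queue_sync_pages() from stacking the whole RPC issuance path on top of whatever stack its caller has already consumed.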
Thread overview: 14+ messages
2015-03-26 1:53 [PATCH 00/10] Lustre fixes green
2015-03-26 1:53 ` green [this message]
2015-03-26 2:04 ` [PATCH v2 01/10] staging/lustre/osc: shorten IO calling path green
2015-03-26 10:09 ` Greg Kroah-Hartman
2015-03-26 1:53 ` [PATCH 02/10] staging/lustre/mdc: Handle empty but non-zero acl xattr green
2015-03-26 1:53 ` [PATCH 03/10] staging/lustre/ptlrpc: false alarm in AT network latency measuring green
2015-03-26 1:53 ` [PATCH 04/10] staging/lustre/mgc: check the import stat for lprocfs green
2015-03-26 1:53 ` [PATCH 05/10] staging/lustre/mgc: detach MGC dev on error green
2015-03-26 1:53 ` [PATCH 06/10] staging/lustre/lov: don't crash accessing LOV object with FID{0,0} green
2015-03-26 1:53 ` [PATCH 07/10] staging/lustre/ptlrpc: fix import state during replay green
2015-03-26 2:07 ` [PATCH v2 " green
2015-03-26 1:53 ` [PATCH 08/10] staging/lustre/llite: glimpse the inode before doing fiemap green
2015-03-26 1:53 ` [PATCH 09/10] staging/lustre: update timestamps after buiding rpc green
2015-03-26 1:53 ` [PATCH 10/10] staging/lustre/xattr: xattr data may be gone with lock held green
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the message as an mbox file, import it into your mail client,
and reply-to-all from there.
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=1427334806-31466-2-git-send-email-green@linuxhacker.ru \
--to=green@linuxhacker.ru \
--cc=andreas.dilger@intel.com \
--cc=bob.glossman@intel.com \
--cc=bobijam.xu@intel.com \
--cc=devel@driverdev.osuosl.org \
--cc=gregkh@linuxfoundation.org \
--cc=linux-kernel@vger.kernel.org \
--cc=oleg.drokin@intel.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
Be sure your reply has a Subject: header at the top and a blank line
before the message body.