From: Jan Stancek <jstancek@redhat.com>
To: ltp@lists.linux.it
Subject: [LTP] readahead02 failure on PPC
Date: Thu, 17 Mar 2016 17:17:36 -0400 (EDT)
Message-ID: <1136739411.10804114.1458249456113.JavaMail.zimbra@redhat.com>
In-Reply-To: <SN1PR12MB078260DD2BC3C2091E1D9893EE8B0@SN1PR12MB0782.namprd12.prod.outlook.com>
----- Original Message -----
> From: "Kshitij Malik" <Kshitij.Malik@mitel.com>
> To: ltp@lists.linux.it
> Sent: Thursday, 17 March, 2016 6:40:58 PM
> Subject: [LTP] readahead02 failure on PPC
>
>
>
> Hi,
Hi,

we prefer plain text over HTML emails on this list.
>
> I'm working on PowerPc based board MPC8360 and we are running 4.1.13 kernel.
and from the previous issue you reported, I recall this is a 32-bit environment.
> During our LTP run, we saw the following failure with readahead02 test.
>
> readahead02 0 TWARN : readahead02.c:353: using less cache than expected
> The log file is attached to this mail.
This warning means readahead() had little or no effect. It's supposed to
read the file into the page cache, so we expect a change in the amount of
used cache and faster subsequent reads (because the data is already in
memory).
>
> Please note that I'm running the latest LTP code and have the latest
> readahead02.c patch, checked in yesterday.
> Can you please help me in finding out the solution for this issue?
Just to confirm this is not an issue with passing a 64-bit parameter
through syscall(), can you try running it with this change and send us
the output?
diff --git a/testcases/kernel/syscalls/readahead/readahead02.c b/testcases/kernel/syscalls/readahead/readahead02.c
index 7dc308c03e5d..0b42542d0d13 100644
--- a/testcases/kernel/syscalls/readahead/readahead02.c
+++ b/testcases/kernel/syscalls/readahead/readahead02.c
@@ -236,7 +236,7 @@ static void read_testfile(int do_readahead, const char *fname, size_t fsize,
 	if (do_readahead) {
 		for (i = 0; i < fsize; i += readahead_size) {
-			TEST(ltp_syscall(__NR_readahead, fd,
+			TEST(readahead(fd,
 				(off64_t) i, readahead_size));
 			if (TEST_RETURN != 0)
 				break;
> Please let me know if you need any other information.
If that makes no difference, "strace -T" could tell us how much time
the kernel is spending in this syscall.
There was an issue with readahead on ppc a while ago, where readahead
was effectively ignored on memory-less nodes, but you should have that
patch already:
commit 6d2be915e589b58cb11418cbe1f22ff90732b6ac
Author: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Date: Thu Apr 3 14:48:23 2014 -0700
mm/readahead.c: fix readahead failure for memoryless NUMA nodes and limit readahead pages
Regards,
Jan
Thread overview: 13+ messages
2016-03-17 17:40 [LTP] readahead02 failure on PPC Kshitij Malik
2016-03-17 19:57 ` Kshitij Malik
2016-03-17 21:17 ` Jan Stancek [this message]
2016-03-17 22:03 ` Kshitij Malik
2016-03-18 8:42 ` Jan Stancek
2016-03-21 18:38 ` Kshitij Malik
2016-03-22 9:49 ` Jan Stancek
2016-03-22 15:18 ` Julio Cruz Barroso
2016-03-23 2:39 ` Julio Cruz Barroso
-- strict thread matches above, loose matches on Subject: below --
2016-03-23 2:54 Kshitij Malik
2016-03-23 5:46 ` Julio Cruz Barroso
2016-03-24 14:40 ` Kshitij Malik
2016-03-24 15:38 ` Jan Stancek