From: Dave Chinner <david@fromorbit.com>
To: Jianshen Liu <ljishen@gmail.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>, linux-xfs@vger.kernel.org
Subject: Re: Question: Modifying kernel to handle all I/O requests without page cache
Date: Sat, 28 Sep 2019 08:17:15 +1000
Message-ID: <20190927221715.GM16973@dread.disaster.area>
In-Reply-To: <CAMgD0BqXq+zz46MEzZ8=pAAXZo_7s1vcpGQKJyby9EZhYOcVNw@mail.gmail.com>
On Thu, Sep 26, 2019 at 06:42:43PM -0700, Jianshen Liu wrote:
> > But if you are trying to create benchmarks for a specific application, if your
> > benchmarks uses DIO or not, will depend on if the application uses DIO or not.
>
> This is my main question. I want to run an application without
> involving page-cache effects even when the application does not
> support DIO.
LD_PRELOAD wrapper for the open() syscall. Check that the target is
a regular file, then add O_DIRECT to the open flags.

That won't help you with mmap() access, which always goes through the
page cache, so things like executables will use the page cache
regardless of what tricks you try to play.
So, as Carlos has said, what you want to do is largely impossible to
achieve.
> > All I/O requests submitted using direct IO must be aligned. So, if the
> > application does not issue aligned requests, the IO requests will fail.
>
> Yes, this is one of the difficulties in my problem. The application
> may not issue I/O that is aligned in offset, length, and buffer
> address. Thus, I cannot blindly convert application I/O to DIO
> within the kernel.
LD_PRELOAD wrapper to bounce buffer unaligned read/write() requests.
> > I will hit the same point again :) and my question is: Why? :) Will you be using
> > a custom kernel? With this modification? If not, you will not be gathering
> > trustable data anyway.
>
> I created a loadable module to patch a vanilla kernel using the kernel
> livepatching mechanism.
That's just asking for trouble. I wouldn't trust a kernel that has
been modified in that way as far as I could throw it.
> > If you are trying to measure an application performance on solution X, well,
> > it is pointless to measure direct IO if the application does not use it or
> > vice-versa, so, modifying an application, again, is not what you will want to do
> > for benchmarking, for sure.
>
> The point is that I'm not trying to measure the performance of an
> application on solution X. I'm trying to generate a
> platform-independent reference unit for the combination of a storage
> device and the application.
Sounds like an exercise that has no practical use to me - the model
will have to be so generic and full of compromises that it won't be
relevant to real-world situations....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview:
2019-09-25 22:51 Question: Modifying kernel to handle all I/O requests without page cache Jianshen Liu
2019-09-26 12:39 ` Carlos Maiolino
2019-09-27 1:42 ` Jianshen Liu
2019-09-27 10:39 ` Carlos Maiolino
2019-09-27 22:17 ` Dave Chinner [this message]