* ionice and FUSE-based filesystems?
@ 2010-09-02 20:37 Chris Friesen
2010-09-03 17:38 ` [fuse-devel] " Manuel Amador (Rudd-O)
2010-09-03 18:57 ` Jens Axboe
0 siblings, 2 replies; 6+ messages in thread
From: Chris Friesen @ 2010-09-02 20:37 UTC (permalink / raw)
To: fuse-devel, axboe, Linux Kernel Mailing List
I'm curious about the limits of using ionice with multiple layers of
filesystems and devices.
In particular, we have a scenario with a FUSE-based filesystem running
on top of xfs on top of LVM, on top of software RAID, on top of spinning
disks. (Something like that, anyways.) The IO scheduler is CFQ.
In the above scenario would you expect the IO nice value of the writes
done by a task to be propagated all the way down to the disk writes? Or
would they get stripped off at some point?
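For reference, the priority I have in mind is the one set with ionice(1),
i.e. the ioprio_set() syscall underneath. A minimal sketch of that call,
assuming CFQ's best-effort class; glibc has no wrapper, so the constants
below are copied from linux/ioprio.h:

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* no glibc wrapper for ioprio_set; constants from linux/ioprio.h */
#define IOPRIO_WHO_PROCESS 1
#define IOPRIO_CLASS_BE    2   /* best-effort, the CFQ default class */
#define IOPRIO_CLASS_SHIFT 13
#define IOPRIO_PRIO_VALUE(class, data) \
        (((class) << IOPRIO_CLASS_SHIFT) | (data))

int main(void)
{
        /* same effect as running the task under `ionice -c 2 -n 7` */
        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7)) < 0) {
                fprintf(stderr, "ioprio_set: %s\n", strerror(errno));
                return 1;
        }
        /* I/O submitted from here on carries best-effort priority 7
           (the lowest), at least as far as the block layer sees it */
        return 0;
}

The question is whether that value survives every layer of the stack below.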
Thanks,
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@genband.com
www.genband.com
* Re: [fuse-devel] ionice and FUSE-based filesystems?
2010-09-02 20:37 ionice and FUSE-based filesystems? Chris Friesen
@ 2010-09-03 17:38 ` Manuel Amador (Rudd-O)
2010-09-03 18:57 ` Jens Axboe
1 sibling, 0 replies; 6+ messages in thread
From: Manuel Amador (Rudd-O) @ 2010-09-03 17:38 UTC (permalink / raw)
To: Chris Friesen
Cc: fuse-devel@lists.sourceforge.net, axboe@kernel.dk,
Linux Kernel Mailing List
I'd also like to know this!
Manuel Amador (Rudd-O)
Cloud.com, Inc. -- http://www.cloud.com
On Sep 2, 2010, at 13:37, Chris Friesen <chris.friesen@genband.com>
wrote:
>
> I'm curious about the limits of using ionice with multiple layers of
> filesystems and devices.
>
> In particular, we have a scenario with a FUSE-based filesystem running
> on top of xfs on top of LVM, on top of software RAID, on top of spinning
> disks. (Something like that, anyways.) The IO scheduler is CFQ.
>
> In the above scenario would you expect the IO nice value of the writes
> done by a task to be propagated all the way down to the disk writes? Or
> would they get stripped off at some point?
>
> Thanks,
> Chris
>
> --
> Chris Friesen
> Software Developer
> GENBAND
> chris.friesen@genband.com
> www.genband.com
* Re: ionice and FUSE-based filesystems?
2010-09-02 20:37 ionice and FUSE-based filesystems? Chris Friesen
2010-09-03 17:38 ` [fuse-devel] " Manuel Amador (Rudd-O)
@ 2010-09-03 18:57 ` Jens Axboe
2010-09-03 19:24 ` Chris Friesen
2010-09-03 19:25 ` Chris Friesen
1 sibling, 2 replies; 6+ messages in thread
From: Jens Axboe @ 2010-09-03 18:57 UTC (permalink / raw)
To: Chris Friesen; +Cc: fuse-devel, Linux Kernel Mailing List
On 09/02/2010 10:37 PM, Chris Friesen wrote:
>
> I'm curious about the limits of using ionice with multiple layers of
> filesystems and devices.
>
> In particular, we have a scenario with a FUSE-based filesystem running
> on top of xfs on top of LVM, on top of software RAID, on top of spinning
> disks. (Something like that, anyways.) The IO scheduler is CFQ.
>
> In the above scenario would you expect the IO nice value of the writes
> done by a task to be propagated all the way down to the disk writes? Or
> would they get stripped off at some point?
Miklos should be able to expand on what fuse does, but at least on
the write side, priorities will only be carried through for non-buffered
writes with the current design (since the actual write-out happens
outside the context of the submitting application).
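To illustrate the distinction, a rough sketch of a write that should keep
the submitter's priority. The file name and sizes are made up for the
example, and the assumption is that O_DIRECT I/O is built and submitted
in the caller's context:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_WHO_PROCESS 1
#define IOPRIO_CLASS_BE    2
#define IOPRIO_PRIO_VALUE(class, data) (((class) << 13) | (data))

int main(void)
{
        /* drop this task's I/O priority to best-effort 7 first */
        syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7));

        /* O_DIRECT bypasses the page cache, so the request is built
           and submitted in this task's context and should inherit its
           ioprio; a plain buffered write would instead be flushed
           later by the kernel writeback threads, which run at their
           own priority */
        int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
                return 1;

        /* O_DIRECT needs aligned buffers/offsets; 4096 is a safe pick */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096))
                return 1;
        memset(buf, 0, 4096);

        ssize_t ret = pwrite(fd, buf, 4096, 0);
        close(fd);
        free(buf);
        return ret == 4096 ? 0 : 1;
}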
--
Jens Axboe
* Re: ionice and FUSE-based filesystems?
2010-09-03 18:57 ` Jens Axboe
@ 2010-09-03 19:24 ` Chris Friesen
2010-09-03 19:25 ` Chris Friesen
1 sibling, 0 replies; 6+ messages in thread
From: Chris Friesen @ 2010-09-03 19:24 UTC (permalink / raw)
To: Jens Axboe; +Cc: fuse-devel, Linux Kernel Mailing List
On 09/03/2010 12:57 PM, Jens Axboe wrote:
> On 09/02/2010 10:37 PM, Chris Friesen wrote:
>>
>> I'm curious about the limits of using ionice with multiple layers of
>> filesystems and devices.
>>
>> In particular, we have a scenario with a FUSE-based filesystem running
>> on top of xfs on top of LVM, on top of software RAID, on top of spinning
>> disks. (Something like that, anyways.) The IO scheduler is CFQ.
>>
>> In the above scenario would you expect the IO nice value of the writes
>> done by a task to be propagated all the way down to the disk writes? Or
>> would they get stripped off at some point?
>
> Miklos should be able to expand on what fuse does, but at least on
> the write side, priorities will only be carried through for non-buffered
> writes with the current design (since the actual write-out happens
> outside the context of the submitting application).
So we're talking O_SYNC or O_DIRECT only? That seems an
unfortunate limitation given that it then forces the app to block. Has
any thought been given to somehow associating the priority with the
actual operation so that it would affect buffered writes as well?
As for fuse...I was concerned that the addition of userspace tasks to
handle the filesystem operations would result in the I/O operations
taking on the priority of the fuse tasks rather than the originating
task. Or does fuse adjust its I/O nice level according to that of the
incoming requests?
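For the sake of discussion, here is a hypothetical sketch of what such
forwarding could look like with the high-level libfuse API.
fuse_get_context() and the ioprio syscalls are real, but I'm not claiming
any existing daemon actually does this:

/* hypothetical: a write handler that adopts the requester's I/O
   priority before touching the backing store */
#define _GNU_SOURCE
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_WHO_PROCESS 1

static int myfs_write(const char *path, const char *buf, size_t size,
                      off_t off, struct fuse_file_info *fi)
{
        /* the request context carries the pid of the writing task */
        const struct fuse_context *ctx = fuse_get_context();

        /* read that task's I/O priority and adopt it here, so the
           backing write is scheduled at the same level */
        int prio = syscall(SYS_ioprio_get, IOPRIO_WHO_PROCESS, ctx->pid);
        if (prio >= 0)
                syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, prio);

        (void)path;
        ssize_t res = pwrite(fi->fh, buf, size, off);
        return res < 0 ? -errno : (int)res;
}

Whether that is safe in a multithreaded daemon (the priority sticks to
the worker thread until it is reset) is a separate question.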
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@genband.com
www.genband.com
* Re: ionice and FUSE-based filesystems?
2010-09-03 18:57 ` Jens Axboe
2010-09-03 19:24 ` Chris Friesen
@ 2010-09-03 19:25 ` Chris Friesen
2010-09-08 14:25 ` Jens Axboe
1 sibling, 1 reply; 6+ messages in thread
From: Chris Friesen @ 2010-09-03 19:25 UTC (permalink / raw)
To: Jens Axboe; +Cc: fuse-devel, Linux Kernel Mailing List
On 09/03/2010 12:57 PM, Jens Axboe wrote:
> On 09/02/2010 10:37 PM, Chris Friesen wrote:
>>
>> I'm curious about the limits of using ionice with multiple layers of
>> filesystems and devices.
>>
>> In particular, we have a scenario with a FUSE-based filesystem running
>> on top of xfs on top of LVM, on top of software RAID, on top of spinning
>> disks. (Something like that, anyways.) The IO scheduler is CFQ.
>>
>> In the above scenario would you expect the IO nice value of the writes
>> done by a task to be propagated all the way down to the disk writes? Or
>> would they get stripped off at some point?
>
> Miklos should be able to expand on what fuse does, but at least on
> the write side, priorities will only be carried through for non-buffered
> writes with the current design (since the actual write-out happens
> outside the context of the submitting application).
So we're talking O_SYNC or O_DIRECT only? That seems an
unfortunate limitation given that it then forces the app to block. Has
any thought been given to somehow associating the priority with the
actual operation so that it would affect buffered writes as well?
As for fuse...I was concerned that the addition of userspace tasks to
handle the filesystem operations would result in the I/O operations
taking on the priority of the fuse tasks rather than the originating
task. Or does fuse adjust its I/O nice level according to that of the
incoming requests?
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@genband.com
www.genband.com
* Re: ionice and FUSE-based filesystems?
2010-09-03 19:25 ` Chris Friesen
@ 2010-09-08 14:25 ` Jens Axboe
0 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2010-09-08 14:25 UTC (permalink / raw)
To: Chris Friesen; +Cc: fuse-devel, Linux Kernel Mailing List
On 2010-09-03 21:25, Chris Friesen wrote:
> On 09/03/2010 12:57 PM, Jens Axboe wrote:
>> On 09/02/2010 10:37 PM, Chris Friesen wrote:
>>>
>>> I'm curious about the limits of using ionice with multiple layers of
>>> filesystems and devices.
>>>
>>> In particular, we have a scenario with a FUSE-based filesystem running
>>> on top of xfs on top of LVM, on top of software RAID, on top of spinning
>>> disks. (Something like that, anyways.) The IO scheduler is CFQ.
>>>
>>> In the above scenario would you expect the IO nice value of the writes
>>> done by a task to be propagated all the way down to the disk writes? Or
>>> would they get stripped off at some point?
>>
>> Miklos should be able to expand on what fuse does, but at least on
>> the write side, priorities will only be carried through for non-buffered
>> writes with the current design (since the actual write-out happens
>> outside the context of the submitting application).
>
> So we're talking O_SYNC or O_DIRECT only? That seems an
> unfortunate limitation given that it then forces the app to block. Has
> any thought been given to somehow associating the priority with the
> actual operation so that it would affect buffered writes as well?
Yes. Work is progressing on tracking dirty pages, which would then
allow you to use ionice for buffered writes as well.
--
Jens Axboe