* generic I/O
@ 2011-10-31 16:24 Kai Meyer
From: Kai Meyer @ 2011-10-31 16:24 UTC (permalink / raw)
To: kernelnewbies
Are there existing generic block device I/O operations available
already? I am familiar with constructing and submitting 'struct bio's,
but what I'd like to do would be greatly simplified if there were an
existing I/O interface similar to the posix 'read' and 'write'
functions. If they don't exist, I would probably end up writing
functions like:
int blk_read(struct block_device *bdev, void *buffer, off_t pos, size_t length);
int blk_write(struct block_device *bdev, void *buffer, off_t pos, size_t length);
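For the read side, I'm imagining something like this minimal synchronous
sketch (single page, error handling omitted; blk_read_page and read_endio
are made-up names, not existing kernel functions):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/completion.h>

/* Illustrative endio: just signal the waiting caller. */
static void read_endio(struct bio *bio, int error)
{
	complete((struct completion *)bio->bi_private);
}

/* Hypothetical helper: synchronously read one page starting at 'sector'. */
static int blk_read_page(struct block_device *bdev, struct page *page,
			 sector_t sector)
{
	struct completion done;
	struct bio *bio = bio_alloc(GFP_KERNEL, 1);

	if (!bio)
		return -ENOMEM;

	init_completion(&done);
	bio->bi_bdev = bdev;
	bio->bi_sector = sector;
	bio->bi_end_io = read_endio;
	bio->bi_private = &done;
	bio_add_page(bio, page, PAGE_SIZE, 0);

	submit_bio(READ, bio);
	wait_for_completion(&done);
	bio_put(bio);
	return 0;
}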
Pros and cons to this sort of approach?
-Kai Meyer
* Generic I/O
@ 2011-11-14 19:15 Kai Meyer
2011-11-15 18:13 ` michi1 at michaelblizek.twilightparadox.com
From: Kai Meyer @ 2011-11-14 19:15 UTC (permalink / raw)
To: kernelnewbies
I'm finding it's really simple to write generic I/O functions for block
devices (via a "struct block_device") to mimic the posix read() and
write() functions (I have to supply the position, since I don't have an
fd to keep a position for me, but that's perfectly ok).
I've got a little hack that allows me to run synchronously or
asynchronously, relying on submit_bio() to create the threads for me. My
caller function has an atomic_t value that I set equal to the number of
bios I want to submit. Then I pass a pointer to that atomic_t around to
each of the bios which decrement it in the endio function for that bio.
Then the caller does this:
while (atomic_read(numbios) > 0)
	msleep(1);
I'm finding the msleep(1) is a really really really long time,
relatively. It seems to work ok if I just have an empty loop, but it
also seems to me like I'm re-inventing a wheel here. Are there
mechanisms that are better suited for waiting for tasks to complete? Or
even for generic block I/O functions?
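For reference, the endio side of my hack is basically this (simplified
sketch; my_endio is just an illustrative name):

/* Each bio carries a pointer to the caller's counter in bi_private. */
static void my_endio(struct bio *bio, int error)
{
	atomic_t *numbios = bio->bi_private;

	bio_put(bio);
	atomic_dec(numbios);
}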
-Kai Meyer
* Generic I/O
2011-11-14 19:15 Generic I/O Kai Meyer
@ 2011-11-15 18:13 ` michi1 at michaelblizek.twilightparadox.com
2011-11-15 18:40 ` Kai Meyer
From: michi1 at michaelblizek.twilightparadox.com @ 2011-11-15 18:13 UTC (permalink / raw)
To: kernelnewbies
Hi!
On 12:15 Mon 14 Nov, Kai Meyer wrote:
...
> My
> caller function has an atomic_t value that I set equal to the number of
> bios I want to submit. Then I pass a pointer to that atomic_t around to
> each of the bios which decrement it in the endio function for that bio.
>
> Then the caller does this:
> while (atomic_read(numbios) > 0)
> 	msleep(1);
>
> I'm finding the msleep(1) is a really really really long time,
> relatively. It seems to work ok if I just have an empty loop, but it
> also seems to me like I'm re-inventing a wheel here.
...
You might want to take a look at wait queues (the kernel equivalent of pthread
condition variables). Basically, instead of calling msleep(), you call
wait_event(). In the function which decrements numbios, you check whether it
is 0 and, if so, call wake_up().
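An untested sketch of the caller side (wq and numbios are your names; the
wait queue head just has to be shared with the endio function):

#include <linux/wait.h>
#include <asm/atomic.h>

static DECLARE_WAIT_QUEUE_HEAD(wq);
static atomic_t numbios;

/* Caller: set the counter, submit the bios, then sleep until it hits 0. */
static void submit_and_wait(int nr_bios)
{
	atomic_set(&numbios, nr_bios);
	/* ... build and submit_bio() each of the nr_bios bios ... */
	wait_event(wq, atomic_read(&numbios) == 0);
}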
-Michi
--
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com
* Generic I/O
2011-11-15 18:13 ` michi1 at michaelblizek.twilightparadox.com
@ 2011-11-15 18:40 ` Kai Meyer
2011-11-15 19:12 ` michi1 at michaelblizek.twilightparadox.com
From: Kai Meyer @ 2011-11-15 18:40 UTC (permalink / raw)
To: kernelnewbies
On 11/15/2011 11:13 AM, michi1 at michaelblizek.twilightparadox.com wrote:
> Hi!
>
> On 12:15 Mon 14 Nov, Kai Meyer wrote:
> ...
>
>> My
>> caller function has an atomic_t value that I set equal to the number of
>> bios I want to submit. Then I pass a pointer to that atomic_t around to
>> each of the bios which decrement it in the endio function for that bio.
>>
>> Then the caller does this:
>> while (atomic_read(numbios) > 0)
>> 	msleep(1);
>>
>> I'm finding the msleep(1) is a really really really long time,
>> relatively. It seems to work ok if I just have an empty loop, but it
>> also seems to me like I'm re-inventing a wheel here.
> ...
>
> You might want to take a look at wait queues (the kernel equivalent of pthread
> condition variables). Basically, instead of calling msleep(), you call
> wait_event(). In the function which decrements numbios, you check whether it
> is 0 and, if so, call wake_up().
>
> -Michi
That sounds very promising. When I read up on wait_event here:
lxr.linux.no/#linux+v2.6.32/include/linux/wait.h#L191
It sounds like it's basically doing the same thing. I would call it like so:
wait_event(wq, atomic_read(numbios) == 0);
To make sure I understand, this seems very much like what I'm doing,
except I'm being woken up every time a bio finishes instead of being
woken up once every millisecond. That is, I'm assuming I would use the
same wait queue for all my bios.
During my testing, when I do a lot of disk I/O, I may potentially have
hundreds of threads waiting on anywhere between 1 and 32 bios. Help me
understand the sort of impact you think I might see between having
hundreds of threads polling every millisecond, and having hundreds get woken up
each time a bio completes. It seems like it would be very helpful in low
I/O scenarios, especially when there are fast disks involved. I'm
concerned that during heavy I/O loads, I'll be doing a lot of
atomic_reads, and I have the impression that atomic_read isn't the
cheapest operation.
-Kai Meyer
* Generic I/O
2011-11-15 18:40 ` Kai Meyer
@ 2011-11-15 19:12 ` michi1 at michaelblizek.twilightparadox.com
From: michi1 at michaelblizek.twilightparadox.com @ 2011-11-15 19:12 UTC (permalink / raw)
To: kernelnewbies
Hi!
On 11:40 Tue 15 Nov, Kai Meyer wrote:
> On 11/15/2011 11:13 AM, michi1 at michaelblizek.twilightparadox.com wrote:
...
> > You might want to take a look at wait queues (the kernel equivalent of pthread
> > condition variables). Basically, instead of calling msleep(), you call
> > wait_event(). In the function which decrements numbios, you check whether it
> > is 0 and, if so, call wake_up().
...
> That sounds very promising. When I read up on wait_event here:
> lxr.linux.no/#linux+v2.6.32/include/linux/wait.h#L191
>
> It sounds like it's basically doing the same thing. I would call it like so:
>
> wait_event(wq, atomic_read(numbios) == 0);
Yes, you'd do something like this.
> To make sure I understand, this seems very much like what I'm doing,
> except I'm being woken up every time a bio finishes instead of being
> woken up once every millisecond. That is, I'm assuming I would use the
> same work queue for all my bios.
You are *not* woken up every time a bio finishes. You are woken up every
time you call wake_up(). You could do something like:
if (atomic_dec_return(numbios) == 0)
	wake_up(wp);
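In a bio endio function, the whole thing would look roughly like this
(sketch; assumes the wait queue head wq is shared with the sleeping caller):

/* Last bio to complete wakes the caller sleeping in wait_event(). */
static void my_endio(struct bio *bio, int error)
{
	atomic_t *numbios = bio->bi_private;

	bio_put(bio);
	if (atomic_dec_return(numbios) == 0)
		wake_up(&wq);
}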
> During my testing, when I do a lot of disk I/O, I may potentially have
> hundreds of threads waiting on anywhere between 1 and 32 bios. Help me
> understand the sort of impact you think I might see between having
> hundreds waiting for a millisecond, and having hundreds get woken up
> each time a bio completes. It seems like it would be very helpful in low
> I/O scenarios, especially when there are fast disks involved. I'm
> concerned that during heavy I/O loads, I'll be doing a lot of
> atomic_reads, and I have the impression that atomic_read isn't the
> cheapest operation.
The wakeups might add some overhead. However, I would worry more about
scheduling overhead on SMP systems than about atomic_read() performance.
-Michi
--
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com