From: Steve Rottinger <steve@pentek.com>
To: Leon Woestenberg <leon.woestenberg@gmail.com>
Cc: linux-kernel@vger.kernel.org, Jens Axboe <jens.axboe@oracle.com>
Subject: Re: splice methods in character device driver
Date: Fri, 12 Jun 2009 16:45:15 -0400 [thread overview]
Message-ID: <4A32BE5B.7080503@pentek.com> (raw)
In-Reply-To: <c384c5ea0906121221p6a20d92fg6716f560164c46bf@mail.gmail.com>
Hi Leon,
It does seem like a lot of code needs to be executed to move a small
chunk of data. That said, I think most of the overhead I was seeing came
from the cumulative cost of the splice system calls themselves. I
increased my pipe size using Jens' pipe-size patch, from 16 to 256
pages, and this had a huge effect -- the speed of my transfers more than
doubled. Pipe sizes larger than 256 pages cause my kernel to crash.
I'm doing about 300 MB/s to my hardware RAID, running two instances of
my splice() copy application (one on each RAID channel). I would like to
combine the two RAID channels using software RAID 0; however, splice(),
even from /dev/zero, runs horribly slowly to a software RAID device. I'd
be curious to know whether anyone else has tried this.
-Steve
Leon Woestenberg wrote:
> Steve, Jens,
>
> another few questions:
>
> On Thu, Jun 4, 2009 at 3:20 PM, Steve Rottinger<steve@pentek.com> wrote:
>
>> ...
>> - The performance is poor, and much slower than transferring directly from
>> main memory with O_DIRECT. I suspect that this has a lot to do with the
>> large number of system calls required to move the data, since each call
>> moves only 64K. Maybe I'll try increasing the pipe size next.
>>
>> Once I get past these issues and the code is in a better state, I'll be
>> happy to share what I can.
>>
>>
> I've been experimenting a bit with mostly-empty functions to understand
> the function call flow:
>
> splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_device);
> pipe_to_device(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
>                struct splice_desc *sd)
>
> So some back-of-a-coaster calculations:
>
> If I understand correctly, a pipe_buffer never spans more than one
> page (typically 4kB).
>
> PIPE_BUFFERS is 16, so splice_from_pipe() is called for every 64 kB.
>
> The actor "pipe_to_device" is called on each pipe_buffer, so for every 4kB.
>
> For my case, I have a DMA engine that does, say, 200 MB/s, resulting in
> roughly 50,000 actor calls per second.
>
> As my use case would be to splice from an acquisition card to disk,
> splice() made an interesting approach.
>
> However, if the above is correct, I assume splice() is not meant for
> my use-case?
>
>
> Regards,
>
> Leon.
>
> /* The actor that takes pages from the pipe to the device.
> *
> * It must move a single struct pipe_buffer to the desired destination.
> * Existing implementations are pipe_to_file, pipe_to_sendpage, pipe_to_user.
> */
> static int pipe_to_device(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
>                           struct splice_desc *sd)
> {
>     int rc;
>
>     printk(KERN_DEBUG "pipe_to_device(buf->offset=%u, sd->len=%u)\n",
>            buf->offset, sd->len);
>
>     /* make sure the data in this buffer is up-to-date */
>     rc = buf->ops->confirm(pipe, buf);
>     if (unlikely(rc))
>         return rc;
>
>     /* create a transfer for this buffer */
>
>     /* for the empty experiment, report the whole buffer as consumed */
>     return sd->len;
> }
>
> /* The kernel wants to write from the pipe into our file at ppos. */
> ssize_t splice_write(struct pipe_inode_info *pipe, struct file *out,
>                      loff_t *ppos, size_t len, unsigned int flags)
> {
>     ssize_t ret;
>
>     printk(KERN_DEBUG "splice_write(len=%zu)\n", len);
>     ret = splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_device);
>     return ret;
> }
Thread overview: 19+ messages
2009-05-11 14:40 splice methods in character device driver Steve Rottinger
2009-05-11 19:22 ` Jens Axboe
2009-05-13 16:59 ` Steve Rottinger
2009-06-03 21:32 ` Leon Woestenberg
2009-06-04 7:32 ` Jens Axboe
2009-06-04 13:20 ` Steve Rottinger
2009-06-12 19:21 ` Leon Woestenberg
2009-06-12 19:59 ` Jens Axboe
2009-06-12 20:45 ` Steve Rottinger [this message]
2009-06-16 11:59 ` Jens Axboe
2009-06-16 15:06 ` Steve Rottinger
2009-06-16 18:24 ` [RFC][PATCH] add support for shrinking/growing a pipe (Was "Re: splice methods in character device driver") Jens Axboe
2009-06-16 18:28 ` splice methods in character device driver Jens Axboe
2009-06-06 21:25 ` Leon Woestenberg
2009-06-08 7:05 ` Jens Axboe
2009-06-12 22:05 ` Leon Woestenberg
2009-06-13 7:26 ` Jens Axboe
2009-06-13 20:04 ` Leon Woestenberg
2009-06-16 11:57 ` Jens Axboe