From: "René Rebe" <rene@exactcode.de>
To: Pete Zaitcev <zaitcev@redhat.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: MAX_USBFS_BUFFER_SIZE
Date: Fri, 3 Mar 2006 08:27:45 +0100
Message-ID: <200603030827.46003.rene@exactcode.de>
In-Reply-To: <20060302130519.588b18a2.zaitcev@redhat.com>
Hi,
On Thursday 02 March 2006 22:05, Pete Zaitcev wrote:
> On Wed, 1 Mar 2006 22:42:35 +0100, René Rebe <rene@exactcode.de> wrote:
>
> > > > drivers/usb/core/devio.c:86
> > > > #define MAX_USBFS_BUFFER_SIZE 16384
>
> > So, queuing a lot of URBs is the recommended way to sustain the bus? Will
> > allowing much bigger buffers not be realistic?
>
> Have you ever considered how many TDs have to be allocated to transfer
> a data buffer this big? No, seriously. If your application cannot deliver
> the transfer speeds with 16KB URBs, we ought to consider if the combination
> of our USB stack, usbfs, libusb and the application ought to get serious
> performance enhancing surgery. The problem is obviously in the software
> overhead.
As I already wrote, queuing multiple URBs in parallel solved the problem for me.
I'll post the libusb patch later. The problem was simply that periods with no
pending URBs wasted a lot of bus time slots in which no URB was exchanged with
the scanner. Queuing N = size / 16k URBs in parallel gets the maximum possible
throughput with the scanner - a 2x speedup. The driver is now even faster than
the vendor's Windows one, by about 20%.
For even further improvement, an async interface would be needed in libusb
(and sanei_usb) so that I can also queue the prologue and epilogue URBs of the
communication protocol into the kernel and thus eliminate some more wasted time
slots. I estimate that the driver would then be over 30% faster than the
Windows one.
Yours,
--
René Rebe - Rubensstr. 64 - 12157 Berlin (Europe / Germany)
http://www.exactcode.de | http://www.t2-project.org
+49 (0)30 255 897 45