From: Mathias Nyman <mathias.nyman@linux.intel.com>
To: Robin Murphy <robin.murphy@arm.com>,
mathias.nyman@intel.com, gregkh@linuxfoundation.org
Cc: linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org,
David.Laight@ACULAB.COM, angelsl@in04.sg
Subject: Re: [PATCH] xhci: Cope with VIA VL805 readahead
Date: Wed, 11 Oct 2017 17:40:32 +0300 [thread overview]
Message-ID: <59DE2D60.1020500@linux.intel.com> (raw)
In-Reply-To: <7116a80607dfa5e9e648de240abef8fa7402cf73.1507658807.git.robin.murphy@arm.com>
On 10.10.2017 21:09, Robin Murphy wrote:
> The VIA VL805 host controller is well-known for causing problems on
> systems with IOMMUs enabled, ranging from triggering endless streams of
> fault messages to locking itself up completely. It appears that the root
> of the problem might be an over-aggressive prefetching of TRBs, wherein
> consuming commands near the end of a queue segment causes it to read off
> the end of the segment, even across a page boundary. This blows up when
> DMA mapping ops are backed by an IOMMU, since there is no guarantee that
> addresses outside the allocated segment are accessible at all.
>
> Some trial-and-error investigation reveals that we can avoid such
> cross-page reads by not using the last few TRBs in a segment; to that
> end, factor out the implicit index of the end-of-segment link TRB, and
> implement a quirk to move it slightly further forward when necessary.
>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
> drivers/usb/host/xhci-mem.c | 32 +++++++++++++++++++-------------
> drivers/usb/host/xhci-pci.c | 5 +++++
> drivers/usb/host/xhci-ring.c | 10 +++++++++-
> drivers/usb/host/xhci.c | 10 +++++-----
> drivers/usb/host/xhci.h | 2 ++
> 5 files changed, 40 insertions(+), 19 deletions(-)
>
Thanks for testing all this.
If possible I'd like to try to find some other solution before chopping the segment
size down to smaller than 256 TRBs.
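If I read the patch right, the quirk boils down to something like this (standalone
userspace sketch; TRBS_PER_SEGMENT as in xhci.h, the guard value is only a
placeholder for whatever the trial-and-error testing settles on):

```c
#include <stdbool.h>

#define TRBS_PER_SEGMENT 256
/*
 * Placeholder guard: how many trailing TRBs to leave unused so the
 * VL805's prefetch cannot run past the segment's page boundary.  The
 * real value would have to come from testing on the hardware.
 */
#define VL805_TRB_GUARD 4

/*
 * Factored-out index of the end-of-segment link TRB: normally the
 * last slot in the segment, pulled forward by a few TRBs when the
 * readahead quirk is set.
 */
static unsigned int link_trb_index(bool readahead_quirk)
{
	unsigned int index = TRBS_PER_SEGMENT - 1;

	if (readahead_quirk)
		index -= VL805_TRB_GUARD;
	return index;
}
```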
I think your first proposal of adding a guard page to the segment pool could be an option.
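As a userspace model, the guard-page variant could look roughly like this
(SEG_BYTES/GUARD_BYTES are assumptions here; in the driver it would mean growing
the dma_pool element size in xhci-mem.c rather than calling posix_memalign()):

```c
#include <stdint.h>
#include <stdlib.h>

#define SEG_BYTES   4096u	/* one ring segment: 256 16-byte TRBs */
#define GUARD_BYTES 4096u	/* trailing page, mapped but never used */

/*
 * Userspace model of the guard-page idea: every pool element is one
 * segment plus one extra page, so a prefetch that runs a few TRBs
 * past the segment end still lands in memory we own.
 */
static void *seg_alloc_with_guard(void)
{
	void *seg;

	if (posix_memalign(&seg, SEG_BYTES, SEG_BYTES + GUARD_BYTES))
		return NULL;
	return seg;
}
```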
Other things that could be checked:
- check if we can avoid prefetching over the segment boundary by altering the link TRB chain or IOC bits.
- check if we prefetch only over mid-ring link TRBs or also over the last link TRB with the cycle change.
  If only mid-ring, then we could try to allocate two consecutive pages for the segments.
- check if prefetching over the segment is related to manually setting the TR dequeue pointer.
  The Set TR Dequeue Pointer command flushes xHC-cached TRBs and might be the cause of the prefetch.
- does the VIA VL805 have some odd xHCI page size (xHCI PAGESIZE register), and does it affect anything?
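For the last point, decoding PAGESIZE would go along these lines (standalone
sketch following xHCI spec section 5.4.3, not the driver's actual code):

```c
#include <stdint.h>

/*
 * Decode the xHCI PAGESIZE operational register (xHCI spec 5.4.3):
 * bit n set means the controller uses a page size of 2^(n+12) bytes,
 * so the common value 0x1 means 4 KiB.  Returns 0 for a bogus
 * all-zero register.
 */
static unsigned int xhci_decoded_page_size(uint32_t pagesize_reg)
{
	if (!(pagesize_reg & 0xffff))
		return 0;
	return 1u << (__builtin_ctz(pagesize_reg & 0xffff) + 12);
}
```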
But if nothing else seems to work, then chopping the segment size could be an option.
-Mathias
Thread overview: 6+ messages
2017-10-10 18:09 [PATCH] xhci: Cope with VIA VL805 readahead Robin Murphy
2017-10-11 9:23 ` David Laight
2017-10-11 14:40 ` Mathias Nyman [this message]
2017-10-11 15:46 ` David Laight
2017-10-12 8:25 ` Mathias Nyman
2017-10-12 10:35 ` Hao Wei Tee