From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Dongxiao Xu <dongxiao.xu@intel.com>,
xen-devel@lists.xen.org, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg hook
Date: Fri, 7 Dec 2012 09:11:00 -0500 [thread overview]
Message-ID: <20121207141100.GD3140@phenom.dumpdata.com> (raw)
In-Reply-To: <50C0ADB502000078000AE9DA@nat28.tlf.novell.com>

On Thu, Dec 06, 2012 at 01:37:41PM +0000, Jan Beulich wrote:
> >>> On 06.12.12 at 14:08, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > While mapping sg buffers, we also need to check whether the DMA
> > buffer crosses a page boundary. If the guest DMA buffer crosses a
> > page boundary, Xen should exchange contiguous memory for it.
> >
> > Besides, we need to back up the original page contents and copy
> > them back after the memory exchange is done.
> >
> > This fixes issues where a device DMAs into static software buffers
> > that cross a page boundary and whose pages are not contiguous in
> > machine memory.
> >
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > ---
> > drivers/xen/swiotlb-xen.c | 47 ++++++++++++++++++++++++++++++++++++++++++++-
> > 1 files changed, 46 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 58db6df..e8f0cfb 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
> > }
> > EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
> >
> > +static bool
> > +check_continguous_region(unsigned long vstart, unsigned long order)
>
> check_continguous_region(unsigned long vstart, unsigned int order)
>
> But - why do you need to do this check order based in the first
> place? Checking the actual length of the buffer should suffice.
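Right - for illustration, a length-based variant might look like the sketch below. This is a userspace sketch, not the actual patch: the machine-address lookup is mocked out (the real code would go through xen_virt_to_bus and use phys_addr_t, as noted above), and it walks only the pages actually covered by [vstart, vstart + len) rather than a full (1 << order) region:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Mocked machine-address lookup: identity map, except virtual page 2
 * is pretended to live at machine page 100 (i.e. discontiguous). */
static uint64_t mock_virt_to_machine(unsigned long vaddr)
{
	unsigned long pfn = vaddr >> PAGE_SHIFT;
	unsigned long mfn = (pfn == 2) ? 100 : pfn;

	return ((uint64_t)mfn << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
}

/* Length-based contiguity check: walk only the pages the buffer
 * actually covers, instead of the whole (1 << order) region. */
static bool check_contiguous_length(unsigned long vstart, size_t len)
{
	uint64_t prev_ma = mock_virt_to_machine(vstart & PAGE_MASK);
	unsigned long v;

	for (v = (vstart & PAGE_MASK) + PAGE_SIZE; v < vstart + len;
	     v += PAGE_SIZE) {
		uint64_t next_ma = mock_virt_to_machine(v);

		if (next_ma != prev_ma + PAGE_SIZE)
			return false;
		prev_ma = next_ma;
	}
	return true;
}
```

A buffer contained in a single page trivially passes (the loop body never runs), so only genuinely multi-page buffers pay for the walk.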
>
> > +{
> > + unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
> > + unsigned long next_ma;
>
> phys_addr_t or some such for both of them.
>
> > + int i;
>
> unsigned long
>
> > +
> > + for (i = 1; i < (1 << order); i++) {
>
> 1UL
>
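To illustrate why the 1UL matters: with a plain `1 << order` the shift is done in int, which is undefined once order reaches 31, while `1UL << order` stays well-defined up to the width of unsigned long. A minimal sketch of the page-count computation done the suggested way:

```c
/* Page count for an allocation order, computed in unsigned long
 * ("1UL" as suggested in the review); a plain `1 << order` would
 * promote to int and be undefined behavior for order >= 31. */
static unsigned long pages_per_order(unsigned int order)
{
	return 1UL << order;
}
```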
> > + next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
> > + if (next_ma != prev_ma + PAGE_SIZE)
> > + return false;
> > + prev_ma = next_ma;
> > + }
> > + return true;
> > +}
> > +
> > /*
> > * Map a set of buffers described by scatterlist in streaming mode for DMA.
> > * This is the scatter-gather version of the above xen_swiotlb_map_page
> > @@ -489,7 +505,36 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
> >
> > for_each_sg(sgl, sg, nelems, i) {
> > phys_addr_t paddr = sg_phys(sg);
> > - dma_addr_t dev_addr = xen_phys_to_bus(paddr);
> > + unsigned long vstart, order;
> > + dma_addr_t dev_addr;
> > +
> > + /*
> > + * While mapping sg buffers, checking to cross page DMA buffer
> > + * is also needed. If the guest DMA buffer crosses page
> > + * boundary, Xen should exchange contiguous memory for it.
> > + * Besides, it is needed to backup the original page contents
> > + * and copy it back after memory exchange is done.
> > + */
> > + if (range_straddles_page_boundary(paddr, sg->length)) {
> > + vstart = (unsigned long)__va(paddr & PAGE_MASK);
> > + order = get_order(sg->length + (paddr & ~PAGE_MASK));
> > + if (!check_continguous_region(vstart, order)) {
> > + unsigned long buf;
> > + buf = __get_free_pages(GFP_KERNEL, order);
> > + memcpy((void *)buf, (void *)vstart,
> > + PAGE_SIZE * (1 << order));
> > + if (xen_create_contiguous_region(vstart, order,
> > + fls64(paddr))) {
> > + free_pages(buf, order);
> > + return 0;
> > + }
> > + memcpy((void *)vstart, (void *)buf,
> > + PAGE_SIZE * (1 << order));
> > + free_pages(buf, order);
> > + }
> > + }
> > +
> > + dev_addr = xen_phys_to_bus(paddr);
> >
> > if (swiotlb_force ||
> > !dma_capable(hwdev, dev_addr, sg->length) ||
>
> How about swiotlb_map_page() (for the compound page case)?

Heh. Thanks - I just got to your reply now and had the same question.

Interestingly enough - this looks like a problem that has been there
forever and nobody ever hit it.

Worse, the problem is even present if a driver uses pci_alloc_coherent
and asks for a 3MB region or so - as we can at most give out 2MB
swaths.
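The 2MB ceiling falls out of the order arithmetic - assuming the exchange path is capped at order 9 (512 pages of 4k, as with MAX_CONTIG_ORDER on x86; that cap is the assumption here), a rough sketch:

```c
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Assumed cap on the contiguous-exchange path:
 * order 9 = 512 pages = 2MB with 4k pages. */
#define MAX_CONTIG_ORDER 9

/* Smallest order whose region covers len bytes (get_order()-alike). */
static unsigned int order_for_len(size_t len)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < len)
		order++;
	return order;
}

/* Whether a contiguous request of len bytes fits under the cap:
 * a 3MB request rounds up to order 10 (4MB) and so cannot be
 * satisfied, while 2MB is exactly order 9 and can. */
static int contig_request_fits(size_t len)
{
	return order_for_len(len) <= MAX_CONTIG_ORDER;
}
```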
>
> Jan
>
Thread overview: 16+ messages
2012-12-06 13:08 [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg hook Dongxiao Xu
2012-12-06 13:37 ` [Xen-devel] " Jan Beulich
2012-12-07 14:11 ` Konrad Rzeszutek Wilk [this message]
2012-12-07 14:08 ` Konrad Rzeszutek Wilk
2012-12-11 6:39 ` Xu, Dongxiao
2012-12-11 17:06 ` Konrad Rzeszutek Wilk
2012-12-12 1:03 ` Xu, Dongxiao
2012-12-12 9:38 ` [Xen-devel] " Jan Beulich
2012-12-19 20:09 ` Konrad Rzeszutek Wilk
2012-12-20 1:23 ` Xu, Dongxiao
2012-12-20 8:56 ` Jan Beulich
2013-01-07 7:17 ` Xu, Dongxiao
2013-01-07 8:46 ` Jan Beulich
2013-01-07 15:55 ` Konrad Rzeszutek Wilk
2012-12-13 16:34 ` Konrad Rzeszutek Wilk