From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com,
rob@landley.net, akpm@linux-foundation.org, joerg.roedel@amd.com,
bhelgaas@google.com, shuahkhan@gmail.com,
linux-kernel@vger.kernel.org, devel@linuxdriverproject.org,
x86@kernel.org
Subject: Re: [RFC PATCH 2/7] swiotlb: Make io_tlb_start a physical address instead of a virtual address
Date: Tue, 9 Oct 2012 12:43:45 -0400 [thread overview]
Message-ID: <20121009164344.GC25276@phenom.dumpdata.com> (raw)
In-Reply-To: <506DF022.4030400@intel.com>
On Thu, Oct 04, 2012 at 01:22:58PM -0700, Alexander Duyck wrote:
> On 10/04/2012 10:19 AM, Konrad Rzeszutek Wilk wrote:
> >>>> @@ -450,7 +451,7 @@ void *swiotlb_tbl_map_single(struct device *hwdev, dma_addr_t tbl_dma_addr,
> >>>> io_tlb_list[i] = 0;
> >>>> for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
> >>>> io_tlb_list[i] = ++count;
> >>>> - dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
> >>>> + dma_addr = (char *)phys_to_virt(io_tlb_start) + (index << IO_TLB_SHIFT);
> >>> I think this is going to fall flat with the other user of
> >>> swiotlb_tbl_map_single - Xen SWIOTLB. When it allocates the io_tlb_start
> >>> and does its magic to make sure it's under 4GB - the io_tlb_start swath
> >>> of memory ends up consisting of 2MB chunks of contiguous space. But each
> >>> chunk is not linear in the DMA space (though it is in the CPU space).
> >>>
> >>> Meaning the io_tlb_start region 0-2MB can fall within the DMA address space
> >>> of 2048MB->2050MB, and io_tlb_start offset 2MB->4MB can fall within 1024MB->1026MB,
> >>> and so on (depending on the availability of memory under 4GB).
> >>>
> >>> There is a clear virt_to_phys(x) != virt_to_dma(x).
> >> Just to be sure I understand you are talking about DMA address space,
> >> not physical address space correct? I am fully aware that DMA address
> >> space can be all over the place. When I was writing the patch set the
> >> big reason why I decided to stop at physical address space was because
> >> DMA address spaces are device specific.
> >>
> >> I understand that virt_to_phys(x) != virt_to_dma(x) for many platforms,
> >> however that is not my assertion. My assertion is (virt_to_phys(x) + y)
> >> == virt_to_phys(x + y). This should be true for any large block of
> >> contiguous memory that is DMA accessible since the CPU and the device
> >> should be able to view the memory in the same layout. If that wasn't
> > That is mostly true for x86, but not all platforms do this.
> >
> >> true I don't think is_swiotlb_buffer would be working correctly since it
> >> is essentially operating on the same assumption prior to my patches.
> > There are two pieces here - the is_swiotlb_buffer and the swiotlb_tbl_[map|unmap]
> > functions.
> >
> > The is_swiotlb_buffer is operating on that principle (and your change
> > to reflect that is OK). The swiotlb_tbl_[*] is not.
> >> If you take a look at patches 4 and 5 I do address changes that end up
> >> needing to be made to Xen SWIOTLB since it makes use of
> >> swiotlb_tbl_map_single. All that I effectively end up changing is that
> >> instead of messing with a void pointer we instead are dealing with a
> >> physical address, and instead of calling xen_virt_to_bus we end up
> >> calling xen_phys_to_bus and thereby drop one extra virt_to_phys call in
> >> the process.
> > Sure that is OK. All of those changes when we bypass the bounce
> > buffer look OK (though I should double-check the patch again to make
> > sure and also just take it for a little test spin).
>
> I'm interested in finding out what the results of your test spin are.
Haven't gotten to that yet :-(
>
> > The issue is when we do _use_ the bounce buffer. At that point we
> > run into the allocation from the bounce buffer where the patches
> > assume that the 64MB swath of bounce buffer memory is contiguous in
> > bus (or DMA) memory. And sadly, that is not the case.
>
> I think I understand what you are saying now. However, I don't think
> the issue applies to my patches.
Great.
>
> If I am not mistaken what you are talking about is the pseudo-physical
> memory versus machine memory. I understand the 64MB block is not
> machine-memory contiguous, but it should be pseudo-physical contiguous
> memory. As such using the pseudo-physical addresses instead of virtual
> addresses should function the same way as using true physical addresses
> to replace virtual addresses. All of the physical-to-machine memory
> translation happens in xen_phys_to_bus, and all of the changes I have
> made take place before that, so the bounce buffers
> should still be working correctly. In addition none of the changes I
OK.
> have made change the bounce buffer boundary assumptions so we should
> have no bounce buffers mapped across the 2MB boundaries.
>
> Thanks,
>
> Alex
>
Thread overview: 29+ messages
2012-10-04 0:38 [RFC PATCH 0/7] Improve swiotlb performance by using physical addresses Alexander Duyck
2012-10-04 0:38 ` [RFC PATCH 1/7] swiotlb: Instead of tracking the end of the swiotlb region just calculate it Alexander Duyck
2012-10-04 13:01 ` Konrad Rzeszutek Wilk
2012-10-04 15:54 ` Alexander Duyck
2012-10-04 16:31 ` Konrad Rzeszutek Wilk
2012-10-04 0:38 ` [RFC PATCH 2/7] swiotlb: Make io_tlb_start a physical address instead of a virtual address Alexander Duyck
2012-10-04 13:18 ` Konrad Rzeszutek Wilk
2012-10-04 17:11 ` Alexander Duyck
2012-10-04 17:19 ` Konrad Rzeszutek Wilk
2012-10-04 20:22 ` Alexander Duyck
2012-10-09 16:43 ` Konrad Rzeszutek Wilk [this message]
2012-10-09 19:11 ` Alexander Duyck
2012-10-04 0:38 ` [RFC PATCH 3/7] swiotlb: Make io_tlb_overflow_buffer a physical address Alexander Duyck
2012-10-04 0:39 ` [RFC PATCH 4/7] swiotlb: Return physical addresses when calling swiotlb_tbl_map_single Alexander Duyck
2012-10-04 0:39 ` [RFC PATCH 5/7] swiotlb: Use physical addresses for swiotlb_tbl_unmap_single Alexander Duyck
2012-10-04 0:39 ` [RFC PATCH 6/7] swiotlb: Use physical addresses instead of virtual in swiotlb_tbl_sync_single Alexander Duyck
2012-10-04 0:39 ` [RFC PATCH 7/7] swiotlb: Do not export swiotlb_bounce since there are no external consumers Alexander Duyck
2012-10-04 12:55 ` [RFC PATCH 0/7] Improve swiotlb performance by using physical addresses Konrad Rzeszutek Wilk
2012-10-04 15:50 ` Alexander Duyck
2012-10-04 13:33 ` Konrad Rzeszutek Wilk
2012-10-04 17:57 ` Alexander Duyck
2012-10-05 16:55 ` Andi Kleen
2012-10-05 19:35 ` Alexander Duyck
2012-10-05 20:02 ` Andi Kleen
2012-10-05 23:23 ` Alexander Duyck
2012-10-06 17:57 ` Andi Kleen
2012-10-06 18:56 ` H. Peter Anvin
2012-10-08 15:43 ` Alexander Duyck
2012-10-09 19:05 ` Alexander Duyck