From: Brian King <brking@linux.vnet.ibm.com>
To: Robert Jennings <rcj@linux.vnet.ibm.com>
Cc: linuxppc-dev@ozlabs.org
Subject: Re: [PATCH] powerpc: Correct VIO bus accounting problem in CMO env.
Date: Wed, 28 Jan 2009 08:47:21 -0600
Message-ID: <49806FF9.9060307@linux.vnet.ibm.com>
In-Reply-To: <20090122194000.GA14767@austin.ibm.com>
Acked-by: Brian King <brking@linux.vnet.ibm.com>
Robert Jennings wrote:
> In the VIO bus code, the wrappers for the dma alloc_coherent and
> free_coherent calls round the allocation size to IOMMU_PAGE_SIZE.
> Looking at the underlying calls, however, the actual mapping is
> promoted to PAGE_SIZE. Changing the rounding in these two functions
> fixes the under-reporting of the entitlement used by the system.
> Without this change, the system could run out of entitlement before
> it believes it has, incurring mapping failures at the firmware level.
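
For illustration, a minimal user-space sketch of the accounting
mismatch. The page sizes are assumptions (4K IOMMU pages and a 64K
kernel PAGE_SIZE, a common configuration on CMO-capable systems), and
roundup_to() is a stand-in for the kernel's roundup():

    #include <stdio.h>

    static const unsigned long iommu_page_size = 4096;   /* assumed 4K  */
    static const unsigned long kernel_page_size = 65536; /* assumed 64K */

    /* stand-in for the kernel's roundup() */
    static unsigned long roundup_to(unsigned long x, unsigned long to)
    {
            return ((x + to - 1) / to) * to;
    }

    int main(void)
    {
            unsigned long size = 6000; /* hypothetical buffer size */

            /* entitlement charged with the old rounding: 8192 bytes */
            printf("charged: %lu\n", roundup_to(size, iommu_page_size));
            /* entitlement the mapping actually consumes: 65536 bytes */
            printf("mapped:  %lu\n", roundup_to(size, kernel_page_size));
            return 0;
    }

Each allocation accounted this way can under-report by up to
PAGE_SIZE - IOMMU_PAGE_SIZE, so the pool runs dry at the firmware
level while the kernel's books still show entitlement to spare.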
>
> Also in the VIO bus code, the wrapper for dma map_sg does not exit in
> an error path where it should. Rather than falling through to the
> success-case code, this patch adds the return needed in the error path.
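
To see why the missing return matters, here is a self-contained sketch;
the names are hypothetical stand-ins for the vio_cmo_* accounting
helpers, not the kernel's API. The success path below the error check
hands back whatever entitlement the mapping did not consume; when the
mapping failed (ret == 0), nothing was consumed, so falling through
releases the same entitlement a second time:

    #include <stdio.h>

    static long entitlement_used;

    static void cmo_alloc(long size)   { entitlement_used += size; }
    static void cmo_dealloc(long size) { entitlement_used -= size; }

    static int map_sg_buggy(long alloc_size)
    {
            int ret;

            cmo_alloc(alloc_size);       /* charge entitlement up front */
            ret = 0;                     /* simulate a mapping failure  */

            if (ret == 0) {
                    cmo_dealloc(alloc_size); /* error path: give it back */
                    /* the patch adds "return ret;" here */
            }

            /* success path: release whatever the mapping didn't use;
             * with ret == 0 that is the full amount, a second time */
            if (alloc_size)
                    cmo_dealloc(alloc_size);
            return ret;
    }

    int main(void)
    {
            map_sg_buggy(512);
            printf("entitlement_used = %ld (should be 0)\n",
                   entitlement_used);
            return 0;
    }

Running this prints -512: the double deallocation leaves the books
claiming less entitlement in use than the platform actually holds.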
>
> Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
>
> ---
> arch/powerpc/kernel/vio.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> Index: b/arch/powerpc/kernel/vio.c
> ===================================================================
> --- a/arch/powerpc/kernel/vio.c
> +++ b/arch/powerpc/kernel/vio.c
> @@ -492,14 +492,14 @@ static void *vio_dma_iommu_alloc_coheren
> struct vio_dev *viodev = to_vio_dev(dev);
> void *ret;
>
> - if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE))) {
> + if (vio_cmo_alloc(viodev, roundup(size, PAGE_SIZE))) {
> atomic_inc(&viodev->cmo.allocs_failed);
> return NULL;
> }
>
> ret = dma_iommu_ops.alloc_coherent(dev, size, dma_handle, flag);
> if (unlikely(ret == NULL)) {
> - vio_cmo_dealloc(viodev, roundup(size, IOMMU_PAGE_SIZE));
> + vio_cmo_dealloc(viodev, roundup(size, PAGE_SIZE));
> atomic_inc(&viodev->cmo.allocs_failed);
> }
>
> @@ -513,7 +513,7 @@ static void vio_dma_iommu_free_coherent(
>
> dma_iommu_ops.free_coherent(dev, size, vaddr, dma_handle);
>
> - vio_cmo_dealloc(viodev, roundup(size, IOMMU_PAGE_SIZE));
> + vio_cmo_dealloc(viodev, roundup(size, PAGE_SIZE));
> }
>
> static dma_addr_t vio_dma_iommu_map_page(struct device *dev, struct page *page,
> @@ -572,6 +572,7 @@ static int vio_dma_iommu_map_sg(struct d
> if (unlikely(!ret)) {
> vio_cmo_dealloc(viodev, alloc_size);
> atomic_inc(&viodev->cmo.allocs_failed);
> + return ret;
> }
>
> for (sgl = sglist, count = 0; count < ret; count++, sgl++)
--
Brian King
Linux on Power Virtualization
IBM Linux Technology Center