From: Roland Dreier <rdreier@cisco.com>
To: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: sfr@canb.auug.org.au, linux-next@vger.kernel.org,
	linux-kernel@vger.kernel.org, general@lists.openfabrics.org
Subject: Re: [ofa-general] Re: linux-next: origin tree build warning
Date: Mon, 22 Jun 2009 23:10:17 -0700
Message-ID: <adatz27pjjq.fsf@cisco.com>
In-Reply-To: <20090622121054K.fujita.tomonori@lab.ntt.co.jp> (FUJITA Tomonori's message of "Mon, 22 Jun 2009 12:11:12 +0900")

Thanks, I queued up:

commit 99987bea474ceca8ec6fb05f81d7d188634cdffd
Author: Roland Dreier <rolandd@cisco.com>
Date:   Mon Jun 22 23:04:13 2009 -0700

    IB/mthca: Replace dma_sync_single() use with proper functions
    
    dma_sync_single() is deprecated now, and the use in mthca is wrong:
    there should be a dma_sync_single_for_cpu() before touching the memory
    from the CPU, and a dma_sync_single_for_device() afterwards.  Fix
    this, prompted by a kick in the pants from a patch by FUJITA
    Tomonori <fujita.tomonori@lab.ntt.co.jp>.
    
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

diff --git a/drivers/infiniband/hw/mthca/mthca_mr.c b/drivers/infiniband/hw/mthca/mthca_mr.c
index d606edf..065b208 100644
--- a/drivers/infiniband/hw/mthca/mthca_mr.c
+++ b/drivers/infiniband/hw/mthca/mthca_mr.c
@@ -352,10 +352,14 @@ static void mthca_arbel_write_mtt_seg(struct mthca_dev *dev,
 
 	BUG_ON(!mtts);
 
+	dma_sync_single_for_cpu(&dev->pdev->dev, dma_handle,
+				list_len * sizeof (u64), DMA_TO_DEVICE);
+
 	for (i = 0; i < list_len; ++i)
 		mtts[i] = cpu_to_be64(buffer_list[i] | MTHCA_MTT_FLAG_PRESENT);
 
-	dma_sync_single(&dev->pdev->dev, dma_handle, list_len * sizeof (u64), DMA_TO_DEVICE);
+	dma_sync_single_for_device(&dev->pdev->dev, dma_handle,
+				   list_len * sizeof (u64), DMA_TO_DEVICE);
 }
 
 int mthca_write_mtt(struct mthca_dev *dev, struct mthca_mtt *mtt,
@@ -803,12 +807,15 @@ int mthca_arbel_map_phys_fmr(struct ib_fmr *ibfmr, u64 *page_list,
 
 	wmb();
 
+	dma_sync_single_for_cpu(&dev->pdev->dev, fmr->mem.arbel.dma_handle,
+				list_len * sizeof(u64), DMA_TO_DEVICE);
+
 	for (i = 0; i < list_len; ++i)
 		fmr->mem.arbel.mtts[i] = cpu_to_be64(page_list[i] |
 						     MTHCA_MTT_FLAG_PRESENT);
 
-	dma_sync_single(&dev->pdev->dev, fmr->mem.arbel.dma_handle,
-			list_len * sizeof(u64), DMA_TO_DEVICE);
+	dma_sync_single_for_device(&dev->pdev->dev, fmr->mem.arbel.dma_handle,
+				   list_len * sizeof(u64), DMA_TO_DEVICE);
 
 	fmr->mem.arbel.mpt->key    = cpu_to_be32(key);
 	fmr->mem.arbel.mpt->lkey   = cpu_to_be32(key);
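
By the way, for anyone chasing the same deprecation warning in another
driver, the rule is simply: sync for the CPU before touching a streaming
buffer, and sync for the device again afterwards.  Here is a minimal
sketch of the pattern; the function and variable names below are made up
for illustration, not taken from mthca:

#include <linux/types.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical example: update a device-readable table that lives in
 * an already-mapped streaming DMA buffer.  Only the two sync calls and
 * their ordering are the point here.
 */
static void my_write_table(struct device *dev, __be64 *table,
			   dma_addr_t dma_handle, int n_entries,
			   const u64 *vals)
{
	int i;

	/* Hand ownership of the buffer to the CPU before writing it. */
	dma_sync_single_for_cpu(dev, dma_handle,
				n_entries * sizeof(u64), DMA_TO_DEVICE);

	for (i = 0; i < n_entries; ++i)
		table[i] = cpu_to_be64(vals[i]);

	/* Hand it back to the device once the CPU is done. */
	dma_sync_single_for_device(dev, dma_handle,
				   n_entries * sizeof(u64), DMA_TO_DEVICE);
}

The direction stays DMA_TO_DEVICE in both calls because the device only
ever reads this memory, just as in the mthca hunks above.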


Thread overview: 3+ messages
2009-06-22  1:01 [ofa-general] linux-next: origin tree build warning Stephen Rothwell
2009-06-22  3:11 ` FUJITA Tomonori
2009-06-23  6:10   ` Roland Dreier [this message]
