From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
To: linux-kernel@vger.kernel.org
Cc: kosaki.motohiro@jp.fujitsu.com, roland@kernel.org,
	sean.hefty@intel.com, hal.rosenstock@gmail.com,
	linux-rdma@vger.kernel.org, mike.marciniszyn@qlogic.com,
	ralph.campbell@qlogic.com
Subject: [PATCH] infiniband, qib: convert old cpumask api into new one.
Date: Thu, 19 May 2011 10:07:05 +0900
Message-ID: <4DD46D39.3060408@jp.fujitsu.com>

Adapt to the new cpumask APIs. We plan to remove the old ones later and to
change the current->cpus_allowed implementation.

No functional change.
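
For reference, the conversion follows the usual pattern for the new cpumask
API: the old macros (cpus_weight(), first_cpu()) take a cpumask_t by value,
while the new ones (cpumask_weight(), cpumask_first()) take a
const struct cpumask *, which tsk_cpus_allowed() provides for a task. A
minimal sketch of the mapping (not part of the patch; the helper names below
are made up for illustration only):

	#include <linux/cpumask.h>
	#include <linux/sched.h>

	/* Old style: the cpumask_t is handed around by value. */
	static int old_first_allowed_cpu(void)
	{
		if (cpus_weight(current->cpus_allowed) == 1)
			return first_cpu(current->cpus_allowed);
		return -1;
	}

	/* New style: operate on a const struct cpumask * instead. */
	static int new_first_allowed_cpu(void)
	{
		const struct cpumask *mask = tsk_cpus_allowed(current);

		if (cpumask_weight(mask) == 1)
			return cpumask_first(mask);
		return -1;
	}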

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Roland Dreier <roland@kernel.org>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Cc: linux-rdma@vger.kernel.org
Cc: Mike Marciniszyn <mike.marciniszyn@qlogic.com>
Cc: Ralph Campbell <ralph.campbell@qlogic.com>
---
 drivers/infiniband/hw/qib/qib_file_ops.c |   11 ++++++-----
 1 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index de7d17d..dd9e1a2 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -1527,6 +1527,7 @@ done_chk_sdma:
 		struct qib_filedata *fd = fp->private_data;
 		const struct qib_ctxtdata *rcd = fd->rcd;
 		const struct qib_devdata *dd = rcd->dd;
+		unsigned int weight;

 		if (dd->flags & QIB_HAS_SEND_DMA) {
 			fd->pq = qib_user_sdma_queue_create(&dd->pcidev->dev,
@@ -1545,8 +1546,8 @@ done_chk_sdma:
 		 * it just means that sooner or later we don't recommend
 		 * a cpu, and let the scheduler do it's best.
 		 */
-		if (!ret && cpus_weight(current->cpus_allowed) >=
-		    qib_cpulist_count) {
+		weight = cpumask_weight(tsk_cpus_allowed(current));
+		if (!ret && weight >= qib_cpulist_count) {
 			int cpu;
 			cpu = find_first_zero_bit(qib_cpulist,
 						  qib_cpulist_count);
@@ -1554,13 +1555,13 @@ done_chk_sdma:
 				__set_bit(cpu, qib_cpulist);
 				fd->rec_cpu_num = cpu;
 			}
-		} else if (cpus_weight(current->cpus_allowed) == 1 &&
-			test_bit(first_cpu(current->cpus_allowed),
+		} else if (weight == 1 &&
+			test_bit(cpumask_first(tsk_cpus_allowed(current)),
 				 qib_cpulist))
 			qib_devinfo(dd->pcidev, "%s PID %u affinity "
 				    "set to cpu %d; already allocated\n",
 				    current->comm, current->pid,
-				    first_cpu(current->cpus_allowed));
+				    cpumask_first(tsk_cpus_allowed(current)));
 	}

 	mutex_unlock(&qib_mutex);
-- 
1.7.3.1


