stable.vger.kernel.org archive mirror
From: <gregkh@linuxfoundation.org>
To: pasic@linux.vnet.ibm.com, cornelia.huck@de.ibm.com,
	gregkh@linuxfoundation.org, silbe@linux.vnet.ibm.com
Cc: <stable@vger.kernel.org>, <stable-commits@vger.kernel.org>
Subject: Patch "tools/virtio/ringtest: fix run-on-all.sh for offline cpus" has been added to the 4.9-stable tree
Date: Mon, 23 Jan 2017 17:03:28 +0100	[thread overview]
Message-ID: <1485187408197157@kroah.com> (raw)


This is a note to let you know that I've just added the patch titled

    tools/virtio/ringtest: fix run-on-all.sh for offline cpus

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     tools-virtio-ringtest-fix-run-on-all.sh-for-offline-cpus.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From 21f5eda9b8671744539c8295b9df62991fffb2ce Mon Sep 17 00:00:00 2001
From: Halil Pasic <pasic@linux.vnet.ibm.com>
Date: Mon, 29 Aug 2016 18:25:22 +0200
Subject: tools/virtio/ringtest: fix run-on-all.sh for offline cpus

From: Halil Pasic <pasic@linux.vnet.ibm.com>

commit 21f5eda9b8671744539c8295b9df62991fffb2ce upstream.

Since commit ef1b144d ("tools/virtio/ringtest: fix run-on-all.sh to work
without /dev/cpu"), run-on-all.sh uses seq 0 $HOST_AFFINITY as the list
of CPU ids to run the command on (assuming the ids of online CPUs are
consecutive and start from 0), where $HOST_AFFINITY is the highest CPU
id in the system, previously determined using lscpu.  This can fail on
systems with offline CPUs.

Instead, let's use lscpu to determine the list of online CPUs.
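
For illustration only (not part of the patch), here is a rough sketch of
the difference on a hypothetical machine where CPU 1 has been taken
offline, leaving CPUs 0, 2 and 3 online (the ids are made up):

  # Hypothetical topology: CPUs 0, 2 and 3 online, CPU 1 offline.
  # Old scheme: take the highest CPU id reported by lscpu ...
  HOST_AFFINITY=$(lscpu -p=cpu | tail -1)                  # -> 3
  # ... and count up from 0, which fills in the gap left by the
  # offline CPU 1:
  echo $(seq 0 $HOST_AFFINITY)                             # -> 0 1 2 3
  # New scheme: ask lscpu for the online CPUs only and drop the
  # '#' comment lines of its parsable output:
  CPUS_ONLINE=$(lscpu --online -p=cpu | grep -v -e '#')
  echo $CPUS_ONLINE                                        # -> 0 2 3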

Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Fixes: ef1b144d ("tools/virtio/ringtest: fix run-on-all.sh to work without /dev/cpu")
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 tools/virtio/ringtest/run-on-all.sh |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/tools/virtio/ringtest/run-on-all.sh
+++ b/tools/virtio/ringtest/run-on-all.sh
@@ -1,12 +1,13 @@
 #!/bin/sh
 
+CPUS_ONLINE=$(lscpu --online -p=cpu|grep -v -e '#')
 #use last CPU for host. Why not the first?
 #many devices tend to use cpu0 by default so
 #it tends to be busier
-HOST_AFFINITY=$(lscpu -p=cpu | tail -1)
+HOST_AFFINITY=$(echo "${CPUS_ONLINE}"|tail -n 1)
 
 #run command on all cpus
-for cpu in $(seq 0 $HOST_AFFINITY)
+for cpu in $CPUS_ONLINE
 do
 	#Don't run guest and host on same CPU
 	#It actually works ok if using signalling
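
For anyone who wants to exercise the offline-CPU case by hand, a rough
recipe follows (not part of the patch; it assumes root, a secondary CPU
that supports hotplug, and the ring test binary built in
tools/virtio/ringtest, used here only as an example):

  # Build the ringtest binaries first, from tools/virtio/ringtest.
  make
  # Take CPU 1 offline via sysfs (needs root; CPU 0 typically cannot
  # be offlined, and not every CPU supports hotplug).
  echo 0 > /sys/devices/system/cpu/cpu1/online
  # Before this patch the loop still iterated over the offline CPU 1;
  # with it, only the CPUs lscpu reports as online are used.
  ./run-on-all.sh ./ring
  # Bring the CPU back online afterwards.
  echo 1 > /sys/devices/system/cpu/cpu1/online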


Patches currently in stable-queue which might be from pasic@linux.vnet.ibm.com are

queue-4.9/tools-virtio-ringtest-fix-run-on-all.sh-for-offline-cpus.patch


Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1485187408197157@kroah.com \
    --to=gregkh@linuxfoundation.org \
    --cc=cornelia.huck@de.ibm.com \
    --cc=pasic@linux.vnet.ibm.com \
    --cc=silbe@linux.vnet.ibm.com \
    --cc=stable-commits@vger.kernel.org \
    --cc=stable@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).