From: Marcelo Tosatti <mtosatti@redhat.com>
To: Nitesh Narayan Lal <nitesh@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
frederic@kernel.org, juri.lelli@redhat.com, abelits@marvell.com,
bhelgaas@google.com, linux-pci@vger.kernel.org,
rostedt@goodmis.org, mingo@kernel.org, peterz@infradead.org,
tglx@linutronix.de
Subject: Re: [PATCH v1 0/3] Preventing job distribution to isolated CPUs
Date: Tue, 16 Jun 2020 14:26:06 -0300
Message-ID: <20200616172606.GA326441@fuller.cnet>
In-Reply-To: <20200610161226.424337-1-nitesh@redhat.com>
Hi Nitesh,
On Wed, Jun 10, 2020 at 12:12:23PM -0400, Nitesh Narayan Lal wrote:
> This patch set originates from one of the patches posted earlier as
> part of the "Task_isolation" mode [1] patch series by Alex Belits
> <abelits@marvell.com>. There are only a couple of changes that I am
> proposing in this patch set compared to what Alex posted earlier.
>
>
> Context
> =======
> At a broad level, all three patches in this patch set make the
> affected drivers and library code respect isolated CPUs by not
> pinning any jobs on them. Failing to do so can hurt latency in RT
> use cases.
>
>
> Patches
> =======
> * Patch1:
>   The first patch makes cpumask_local_spread() aware of isolated
>   CPUs. It ensures that this API returns only housekeeping CPUs
>   (a sketch of the idea follows this list).
>
> * Patch2:
>   This patch ensures that a probe function invoked via
>   work_on_cpu() does not run on an isolated CPU.
>
> * Patch3:
>   This patch makes store_rps_map() aware of isolated CPUs so that
>   RPS does not queue any jobs on an isolated CPU.
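>
> A rough sketch of the cpumask_local_spread() change, assuming the
> HK_FLAG_DOMAIN | HK_FLAG_WQ flags described under "Changes" below
> (the exact hunk may differ):
>
>     /* lib/cpumask.c -- sketch only, not the exact hunk */
>     #include <linux/cpumask.h>
>     #include <linux/sched/isolation.h>
>
>     unsigned int cpumask_local_spread(unsigned int i, int node)
>     {
>             const struct cpumask *mask;
>             int cpu;
>
>             /* Consider only CPUs kept for housekeeping. */
>             mask = housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_WQ);
>
>             /* Wrap: we always want to return some CPU. */
>             i %= cpumask_weight(mask);
>
>             if (node == NUMA_NO_NODE) {
>                     for_each_cpu(cpu, mask)
>                             if (i-- == 0)
>                                     return cpu;
>             } else {
>                     /* Prefer CPUs local to @node... */
>                     for_each_cpu_and(cpu, cpumask_of_node(node), mask)
>                             if (i-- == 0)
>                                     return cpu;
>
>                     /* ...then fall back to the remaining ones. */
>                     for_each_cpu(cpu, mask) {
>                             if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>                                     continue;
>                             if (i-- == 0)
>                                     return cpu;
>                     }
>             }
>             BUG();
>     }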
>
>
> Changes
> =======
> To fix the above-mentioned issues, Alex used housekeeping_cpumask().
> The only changes that I am proposing here are:
> - Removing the dependency on CONFIG_TASK_ISOLATION that Alex's
>   series introduced, as it is safe to rely on housekeeping_cpumask()
>   even when there are no isolated CPUs: it then falls back to all
>   available CPUs in each of the above scenarios.
> - Using both HK_FLAG_DOMAIN and HK_FLAG_WQ in all three patches,
>   because we want the above fixes not only with isolcpus but also
>   with something like systemd's CPU affinity (a sketch of how the
>   flags are applied in the PCI probe path follows this list).
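>
> For illustration, a rough sketch of how these flags would be applied
> in the PCI probe path (patch 2); the exact hunk may differ:
>
>     /* drivers/pci/pci-driver.c, pci_call_probe() -- sketch only.
>      * 'ddi' is the drv/dev/id tuple already set up by this
>      * function, and 'node' is dev_to_node(&dev->dev).
>      */
>     int cpu, error;
>
>     if (node < 0 || node >= MAX_NUMNODES || !node_online(node))
>             cpu = nr_cpu_ids;   /* no usable node: probe locally */
>     else
>             /* Pick a housekeeping CPU on the device's node. */
>             cpu = cpumask_any_and(cpumask_of_node(node),
>                                   housekeeping_cpumask(HK_FLAG_WQ |
>                                                        HK_FLAG_DOMAIN));
>
>     if (cpu < nr_cpu_ids)
>             error = work_on_cpu(cpu, local_pci_probe, &ddi);
>     else
>             error = local_pci_probe(&ddi);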
>
>
> Testing
> =======
> * Patch 1:
> The fix for cpumask_local_spread() was tested by creating VFs,
> loading the iavf module, and adding a tracepoint to confirm that
> only housekeeping CPUs are picked when an appropriate isolation
> profile is set up, and that all available CPUs are used when no CPU
> isolation is required/configured.
>
> * Patch 2:
> To test the PCI fix, I hot-plugged a virtio-net-pci device from the
> QEMU console and forced its addition to a specific NUMA node to
> trigger the code path that includes the proposed fix, then verified
> via a tracepoint that only housekeeping CPUs are used. I understand
> that this may not be the best way to test it, so I am open to
> suggestions for testing this fix in a better way if required.
>
> * Patch 3:
> To test the fix in store_rps_map(), I tried configuring an isolated
> CPU by writing its mask to /sys/class/net/en*/queues/rx*/rps_cpus,
> which failed with 'write error: Invalid argument'. Writing a
> non-isolated CPU mask to rps_cpus succeeded without any error (a
> rough sketch of the check behind this follows).
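>
> A rough sketch of the check that produces this error (the exact
> hunk may differ):
>
>     /* net/core/net-sysfs.c, store_rps_map() -- sketch only */
>     if (!cpumask_empty(mask)) {
>             /* Drop any isolated CPUs from the requested map. */
>             cpumask_and(mask, mask,
>                         housekeeping_cpumask(HK_FLAG_DOMAIN |
>                                              HK_FLAG_WQ));
>
>             /* Only isolated CPUs were requested: reject the write. */
>             if (cpumask_empty(mask)) {
>                     free_cpumask_var(mask);
>                     return -EINVAL;
>             }
>     }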
>
> [1] https://patchwork.ozlabs.org/project/netdev/patch/51102eebe62336c6a4e584c7a503553b9f90e01c.camel@marvell.com/
>
> Alex Belits (3):
> lib: restricting cpumask_local_spread to only housekeeping CPUs
> PCI: prevent work_on_cpu's probe from executing on isolated CPUs
> net: restrict queuing of receive packets to housekeeping CPUs
>
> drivers/pci/pci-driver.c | 5 ++++-
> lib/cpumask.c | 43 +++++++++++++++++++++++-----------------
> net/core/net-sysfs.c | 10 +++++++++-
> 3 files changed, 38 insertions(+), 20 deletions(-)
>
> --
>
Looks good to me.

The flags mechanism is not well organized: this uses HK_FLAG_WQ to
infer that nohz_full is set (while HK_FLAG_WQ should indicate that
non-affined workqueue threads should not run on certain CPUs).
But that is a problem with the flags (which apparently Frederic
wants to fix by exposing a limited number of options to users), not
with this patch set.
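
For reference, a rough sketch of the flags in question as of ~v5.7
(include/linux/sched/isolation.h, abridged): nohz_full= sets several
of them at once, including HK_FLAG_WQ, while isolcpus= sets
HK_FLAG_DOMAIN, which is why testing HK_FLAG_WQ happens to catch
nohz_full setups:

    /* include/linux/sched/isolation.h (~v5.7, abridged) */
    enum hk_flags {
            HK_FLAG_TIMER       = 1,
            HK_FLAG_RCU         = (1 << 1),
            HK_FLAG_MISC        = (1 << 2),
            HK_FLAG_SCHED       = (1 << 3),
            HK_FLAG_TICK        = (1 << 4),
            HK_FLAG_DOMAIN      = (1 << 5),
            HK_FLAG_WQ          = (1 << 6),
            HK_FLAG_MANAGED_IRQ = (1 << 7),
    };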