Linux block layer
From: Ming Lei <ming.lei@redhat.com>
To: "chenxiang (M)" <chenxiang66@hisilicon.com>
Cc: lkml@sdf.org, tglx@linutronix.de, kbusch@kernel.org,
	"axboe@kernel.dk" <axboe@kernel.dk>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Linuxarm <linuxarm@huawei.com>,
	John Garry <john.garry@huawei.com>
Subject: Re: The irq Affinity is changed after the patch(Fixes: b1a5a73e64e9 ("genirq/affinity: Spread vectors on node according to nr_cpu ratio"))
Date: Tue, 19 Nov 2019 11:17:00 +0800	[thread overview]
Message-ID: <20191119031700.GE391@ming.t460p> (raw)
In-Reply-To: <a8a89884-8323-ff70-f35e-0fcf5d7afefc@hisilicon.com>


On Tue, Nov 19, 2019 at 11:05:55AM +0800, chenxiang (M) wrote:
> Hi Ming,
> 
> 在 2019/11/19 9:42, Ming Lei 写道:
> > On Tue, Nov 19, 2019 at 09:25:30AM +0800, chenxiang (M) wrote:
> > > Hi,
> > > 
> > > There are 128 cpus and 16 irqs for SAS controller in my system, and there
> > > are 4 Nodes, every 32 cpus are for one node (cpu0-31 for node0, cpu32-63 for
> > > node1, cpu64-95 for node2, cpu96-127 for node3).
> > > We use function pci_alloc_irq_vectors_affinity() to set the affinity of
> > > irqs.
> > > 
> > > I find that  before the patch (Fixes: b1a5a73e64e9 ("genirq/affinity: Spread
> > > vectors on node according to nr_cpu ratio")), the relationship between irqs
> > > and cpus is: irq0 bind to cpu0-7, irq1 bind to cpu8-15,
> > > irq2 bind to cpu16-23, irq3 bind to cpu24-31,irq4 bind to cpu32-39... irq15
> > > bind to cpu120-127. But after the patch, the relationship is changed: irq0
> > > bind to cpu32-39,
> > > irq1 bind to cpu40-47, ..., irq11 bind to cpu120-127, irq12 bind to cpu0-7,
> > > irq13 bind to cpu8-15, irq14 bind to cpu16-23, irq15 bind to cpu24-31.
> > > 
> > > I notice that before calling the sort() in function alloc_nodes_vectors(),
> > > the id of array node_vectors[] is from 0,1,2,3. But after function sort(),
> > > the index of array node_vectors[] is 1,2,3,0.
> > > But I think it sorts according to the number of CPUs in each node, so the
> > > order should be the same as before calling sort(), since every node has
> > > 32 CPUs.
> > Maybe there are more non-present CPUs covered by node 0.
> > 
> > Could you provide the following log?
> > 
> > 1) lscpu
> > 
> > 2) ./dump-io-irq-affinity $PCI_ID_SAS
> > 
> > 	http://people.redhat.com/minlei/tests/tools/dump-io-irq-affinity
> > 
> > You need to figure out the PCI ID(the 1st column of lspci output) of the SAS
> > controller via lspci.
> 
> Sorry, I can't access the link you provided, but I can provide those irqs'
> affinity in the attachment.
> I also wrote a small testcase, and found the index order is 1, 2, 3, 0
> after calling sort().

A runtime log from /proc/interrupts isn't useful for investigating this
affinity allocation issue; please use the attached script to collect the
log.
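The reordering chenxiang observed would be consistent with the earlier guess: if node 0 also covers non-present CPUs, its CPU count is larger than the other nodes', and a sort by ascending CPU count then places it last. A toy illustration with assumed counts (40 vs 32 is hypothetical, not real data from this system):

```shell
# Toy illustration, assumed data: node 0 covers 8 extra non-present
# CPUs (40 vs 32 on nodes 1-3). Sorting by ascending CPU count, with
# ties broken by node id, yields the observed order 1, 2, 3, 0.
ORDER=$(printf '%s\n' "0 40" "1 32" "2 32" "3 32" \
	| sort -k2,2n -k1,1n | awk '{print $1}' | xargs)
echo "node order after sort: $ORDER"
```

With equal counts on all four nodes the order would stay 0, 1, 2, 3, which is why the extra CPUs on node 0 matter here.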


Thanks,
Ming

[-- Attachment #2: dump-io-irq-affinity --]
[-- Type: text/plain, Size: 980 bytes --]

#!/bin/sh

# Map a PCI ID to its block device (if any) by matching the device path.
get_disk_from_pcid()
{
	PCID=$1

	DISKS=$(find /sys/block -name "*")
	for DISK in $DISKS; do
		DISKP=$(realpath "$DISK/device")
		echo "$DISKP" | grep -q "$PCID" && basename "$DISK" && break
	done
}

# Print the configured and effective CPU affinity of every MSI irq
# belonging to the given PCI device.
dump_irq_affinity()
{
	PCID=$1
	PCIP=$(find /sys/devices -name "*$PCID" | grep pci)

	[ ! -d "$PCIP/msi_irqs" ] && return

	IRQS=$(ls "$PCIP/msi_irqs") || return

	DISK=$(get_disk_from_pcid "$PCID")
	echo "PCI name is $PCID: $DISK"

	for IRQ in $IRQS; do
	    CPUS=$(cat "/proc/irq/$IRQ/smp_affinity_list")
	    ECPUS=$(cat "/proc/irq/$IRQ/effective_affinity_list")
	    echo -e "\tirq $IRQ, cpu list $CPUS, effective list $ECPUS"
	done
}


if [ $# -ge 1 ]; then
	PCIDS=$1
else
	# Default to all NVMe controllers when no PCI ID is passed in
	PCIDS=$(lspci | grep "Non-Volatile memory controller" | awk '{print $1}')
fi

echo "kernel version: "
uname -a

for PCID in $PCIDS; do
	dump_irq_affinity "$PCID"
done


Thread overview: 7+ messages
2019-11-19  1:25 The irq Affinity is changed after the patch(Fixes: b1a5a73e64e9 ("genirq/affinity: Spread vectors on node according to nr_cpu ratio")) chenxiang (M)
2019-11-19  1:42 ` Ming Lei
     [not found]   ` <a8a89884-8323-ff70-f35e-0fcf5d7afefc@hisilicon.com>
2019-11-19  3:17     ` Ming Lei [this message]
2019-11-19  3:32       ` chenxiang (M)
2019-11-19  6:56         ` Ming Lei
2019-12-08  7:42     ` George Spelvin
2019-12-09  2:58       ` chenxiang (M)
