From: Thomas Monjalon
Subject: Re: Recent changes related to interrupt thread
Date: Mon, 16 Nov 2015 14:48:42 +0100
Message-ID: <8192567.2fdTdH6sjP@xps13>
In-Reply-To: <20151116123200.GA2667@scalar.blr.asicdesigners.com>
References: <20151116123200.GA2667@scalar.blr.asicdesigners.com>
To: Rahul Lakkireddy
Cc: dev@dpdk.org, Nirranjan Kirubaharan, Felix Marti, Kumar Sanghvi
List-Id: patches and discussions about DPDK

Hi,

2015-11-16 18:02, Rahul Lakkireddy:
> Hi,
>
> I notice that the following changeset:
>
> Fixes: fd6949c55c9a ("eal: fix io permission for virtio interrupt handler")
>
> has moved the initialization of the interrupt thread to after the master
> lcore has been initialized. However, this causes the interrupt thread
> to _inherit_ the affinity of the master lcore. Hence, all interrupts now
> seem to be handled by _only_ the master lcore. Because of this change,
> it seems that alarm interrupts would also be handled by the master lcore
> only, IIUC.
>
> We are seeing a performance regression for the cxgbe PMD after this
> commit, since the cxgbe PMD relies on an alarm to periodically transmit
> pending coalesced packets.
>
> Also, this perf degradation is only seen if a queue is allocated on the
> master lcore, as in the l3fwd app. If the master lcore is skipped, no
> degradation is seen, since only the alarm will run on the master lcore.
>
> So, is the change making all interrupts, including alarm interrupts,
> be handled by _only_ the master lcore intended?

No, it was not intended.
The idea was to inherit settings (iopl) from the device initialization
into the interrupt thread.
Though, a DPDK driver is not really supposed to rely on interrupt
performance. So having interrupts managed on any core was more or less
a side effect.

> BTW, I have tried setting the affinity to all cpus instead in
> eal_intr_init() and this seems to restore the perf back. Perhaps it's
> better to move the master lcore initialization to after the interrupt
> thread has been initialized as well? Thoughts?

Yes, I think it's possible.
We can also imagine a command-line option to set the interrupt affinity,
with a default which mimics the old behaviour.

In order to make this conversation clearer, and for later reference,
below is the DPDK init call tree:

start
        driver constructor (if .a)
                rte_eal_driver_register
        main
                rte_eal_init
                        eal_parse_args
                        rte_eal_pci_init
                        rte_eal_memory_init
                        eal_plugins_init
                                dlopen
                                        driver constructor (if .so)
                                                rte_eal_driver_register
                        eal_thread_init_master
                                eal_thread_set_affinity
                        rte_eal_dev_init
                                driver->init
                                        PMD init
                                                rte_eth_driver_register
                        rte_eal_intr_init
                                pthread_create
                                        eal_intr_thread_main
                                                eal_intr_handle_interrupts
                        pthread_create
                        rte_eal_pci_probe
                                driver->devinit
                                        rte_eth_dev_init
                                                rte_eth_dev_allocate
                                                eth_drv->eth_dev_init
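
For illustration only, a minimal sketch of the workaround Rahul describes
(widening the interrupt thread's affinity inside rte_eal_intr_init()); the
helper name and placement are assumptions, not the actual patch:

/* Hypothetical helper for the Linux EAL interrupt code: give the freshly
 * created interrupt thread an all-CPU affinity mask so it no longer keeps
 * the single-CPU mask inherited from eal_thread_init_master().
 * Requires _GNU_SOURCE (already set when building the EAL). */
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

static int
intr_thread_set_wide_affinity(pthread_t intr_thread)
{
        cpu_set_t cpuset;
        long i, ncpus = sysconf(_SC_NPROCESSORS_ONLN);

        CPU_ZERO(&cpuset);
        for (i = 0; i < ncpus; i++)
                CPU_SET(i, &cpuset);

        return pthread_setaffinity_np(intr_thread, sizeof(cpuset), &cpuset);
}

Calling something like this right after the pthread_create() in
rte_eal_intr_init() would spread interrupt handling again while keeping the
iopl inheritance, since the affinity is only changed after the thread has
been created.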
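
For context on why the interrupt thread's placement matters here, a generic
sketch of the alarm-based flush pattern (not the cxgbe code; the names and
the 100 us period are made up). On Linux, rte_eal_alarm_set() callbacks are
executed by the EAL interrupt thread, so whichever core that thread is
pinned to ends up doing the periodic flush work:

#include <rte_alarm.h>

#define TX_FLUSH_PERIOD_US 100  /* hypothetical flush period */

/* hypothetical helper that pushes out whatever sits in the coalescing
 * buffer of the given TX queue */
static void flush_pending_tx(void *txq);

static void
tx_flush_alarm_cb(void *arg)
{
        flush_pending_tx(arg);
        /* re-arm so the flush keeps firing periodically; this callback
         * runs in the EAL interrupt thread, not on a polling lcore */
        rte_eal_alarm_set(TX_FLUSH_PERIOD_US, tx_flush_alarm_cb, arg);
}

/* armed once at queue setup:
 *   rte_eal_alarm_set(TX_FLUSH_PERIOD_US, tx_flush_alarm_cb, txq);
 */

If the interrupt thread shares a core with a polling lcore that also owns a
queue (the l3fwd case above), the flush work competes with the fast path,
which matches the regression described.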