Date: Thu, 27 Jan 2022 09:07:59 +0000
From: Cristian Marussi
To: Peter Hilber
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	sudeep.holla@arm.com, james.quinlan@broadcom.com,
	Jonathan.Cameron@Huawei.com, f.fainelli@gmail.com,
	etienne.carriere@linaro.org, vincent.guittot@linaro.org,
	souvik.chakravarty@arm.com, igor.skalkin@opensynergy.com,
	"Michael S. Tsirkin", virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 1/6] firmware: arm_scmi: Add atomic mode support to
 virtio transport
Message-ID: <20220127090759.GA5776@e120937-lin>
References: <20220124100341.41191-1-cristian.marussi@arm.com>
 <20220124100341.41191-2-cristian.marussi@arm.com>
 <425e9a2b-a03f-a038-2598-33f28cd5f4e9@opensynergy.com>
In-Reply-To: <425e9a2b-a03f-a038-2598-33f28cd5f4e9@opensynergy.com>

On Wed, Jan 26, 2022 at 03:28:52PM +0100, Peter Hilber wrote:
> On 24.01.22 11:03, Cristian Marussi wrote:
> > Add support for .mark_txdone and .poll_done transport operations to SCMI
> > VirtIO transport as pre-requisites to enable atomic operations.
> >
> > Add a Kernel configuration option to enable SCMI VirtIO transport polling
> > and atomic mode for selected SCMI transactions while leaving it default
> > disabled.
> >
> Hi Cristian,
>
Hi Peter,

> please see one remark below.
>
> Best regards,
>
> Peter
>
> > Cc: "Michael S. Tsirkin"
> > Cc: Igor Skalkin
> > Cc: Peter Hilber
> > Cc: virtualization@lists.linux-foundation.org
> > Signed-off-by: Cristian Marussi
> > ---

[snip]

> > +/**
> > + * virtio_poll_done  - Provide polling support for VirtIO transport
> > + *
> > + * @cinfo: SCMI channel info
> > + * @xfer: Reference to the transfer being poll for.
> > + *
> > + * VirtIO core provides a polling mechanism based only on last used indexes:
> > + * this means that it is possible to poll the virtqueues waiting for something
> > + * new to arrive from the host side but the only way to check if the freshly
> > + * arrived buffer was what we were waiting for is to compare the newly arrived
> > + * message descriptors with the one we are polling on.
> > + *
> > + * As a consequence it can happen to dequeue something different from the buffer
> > + * we were poll-waiting for: if that is the case such early fetched buffers are
> > + * then added to a the @pending_cmds_list list for later processing by a
> > + * dedicated deferred worker.
> > + *
> > + * So, basically, once something new is spotted we proceed to de-queue all the
> > + * freshly received used buffers until we found the one we were polling on, or,
> > + * we have 'seemingly' emptied the virtqueue; if some buffers are still pending
> > + * in the vqueue at the end of the polling loop (possible due to inherent races
> > + * in virtqueues handling mechanisms), we similarly kick the deferred worker
> > + * and let it process those, to avoid indefinitely looping in the .poll_done
> > + * helper.
> > + *
> > + * Note that, since we do NOT have per-message suppress notification mechanism,
> > + * the message we are polling for could be delivered via usual IRQs callbacks
> > + * on another core which happened to have IRQs enabled: in such case it will be
> > + * handled as such by scmi_rx_callback() and the polling loop in the
> > + * SCMI Core TX path will be transparently terminated anyway.
> > + *
> > + * Return: True once polling has successfully completed.
> > + */
> > +static bool virtio_poll_done(struct scmi_chan_info *cinfo,
> > +			     struct scmi_xfer *xfer)
> > +{
> > +	bool pending, ret = false;
> > +	unsigned int length, any_prefetched = 0;
> > +	unsigned long flags;
> > +	struct scmi_vio_msg *next_msg, *msg = xfer->priv;
> > +	struct scmi_vio_channel *vioch = cinfo->transport_info;
> > +
> > +	if (!msg)
> > +		return true;
> > +
> > +	spin_lock_irqsave(&vioch->lock, flags);
>
> If now acquiring vioch->lock here, I see no need to virtqueue_poll() any
> more. After checking msg->poll_status, we could just directly try
> virtqueue_get_buf().
>
> On the other hand, always taking the vioch->lock in a busy loop might
> better be avoided (I assumed before that taking it was omitted on
> purpose), since it might hamper tx channel progress in other cores (but
> I'm not sure about the actual impact).
>
> Also, I don't yet understand why the vioch->lock would need to be taken
> here.

There was a race I could reproduce between the below check against
VIO_MSG_POLL_DONE and the later poll_idx update near the end of the poll
loop: another thread could set VIO_MSG_POLL_DONE after this thread had
checked it, and then this same thread would clear it while rewriting the
new poll_idx. So, at first, I needlessly enlarged the spinlocked section
(even though I knew it was suboptimal, given virtqueue_poll() does not
need serialization) and then forgot to properly review this.
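As an aside, the dequeue-until-match flow that the kernel-doc above
describes can be modelled in plain C. This is only an illustrative
sketch (fake_vq, pending_list and poll_done here are made-up stand-ins,
not the real scmi_vio structures or the virtio API):

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_MSGS 16

/* Toy virtqueue: ids of used buffers, in the order the host returned them. */
struct fake_vq {
	int used[MAX_MSGS];
	size_t head, tail;
};

/* Stand-in for @pending_cmds_list: early-fetched buffers for the worker. */
struct pending_list {
	int ids[MAX_MSGS];
	size_t n;
};

/* Dequeue one used buffer; false when the queue looks empty. */
static bool vq_get_buf(struct fake_vq *vq, int *id)
{
	if (vq->head == vq->tail)
		return false;
	*id = vq->used[vq->head++];
	return true;
}

/*
 * Poll for @want: drain used buffers until it shows up or the queue
 * seems empty; anything fetched early is deferred to the worker.
 */
static bool poll_done(struct fake_vq *vq, int want, struct pending_list *pend)
{
	int id;

	while (vq_get_buf(vq, &id)) {
		if (id == want)
			return true;
		pend->ids[pend->n++] = id;	/* defer to worker */
	}
	return false;	/* caller keeps polling */
}
```

The point of the model is only the control flow: a miss does not discard
the buffer, it parks it for the deferred worker, and an apparently empty
queue ends the loop instead of spinning in .poll_done forever.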
BUT now that, following your suggestion, I have introduced a dedicated
poll_status, that race is gone, so I shrank the spinlocked section back
to what it was before and it works fine (even poll_idx itself does not
really need to be protected, given it can be accessed only here).

I'll post the fix in -rc2 together with the core change in the
virtio-core I proposed last week to Michael (if not too costly
performance-wise).

Thanks,
Cristian

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel