Date: Tue, 19 Mar 2024 04:28:08 -0400
From: "Michael S. Tsirkin"
To: Gavin Shan
Cc: Will Deacon, virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
 jasowang@redhat.com, xuanzhuo@linux.alibaba.com, yihyu@redhat.com,
 shan.gavin@gmail.com, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas, mochs@nvidia.com
Subject: Re: [PATCH] virtio_ring: Fix the stale index in available ring
Message-ID: <20240319034110-mutt-send-email-mst@kernel.org>
In-Reply-To: <6b829cfc-9cbe-42eb-9935-62d2cf5fbcc4@redhat.com>
References: <20240314074923.426688-1-gshan@redhat.com>
 <20240318165924.GA1824@willie-the-truck>
 <35a6bcef-27cf-4626-a41d-9ec0a338fe28@redhat.com>
 <20240319020905-mutt-send-email-mst@kernel.org>
 <20240319020949-mutt-send-email-mst@kernel.org>
 <6b829cfc-9cbe-42eb-9935-62d2cf5fbcc4@redhat.com>

On Tue, Mar 19, 2024 at 04:54:15PM +1000, Gavin Shan wrote:
> On 3/19/24 16:10, Michael S. Tsirkin wrote:
> > On Tue, Mar 19, 2024 at 02:09:34AM -0400, Michael S. Tsirkin wrote:
> > > On Tue, Mar 19, 2024 at 02:59:23PM +1000, Gavin Shan wrote:
> > > > On 3/19/24 02:59, Will Deacon wrote:
> [...]
> > > > > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > > > > index 49299b1f9ec7..7d852811c912 100644
> > > > > > --- a/drivers/virtio/virtio_ring.c
> > > > > > +++ b/drivers/virtio/virtio_ring.c
> > > > > > @@ -687,9 +687,15 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> > > > > >  	avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> > > > > >  	vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> > > > > > -	/* Descriptors and available array need to be set before we expose the
> > > > > > -	 * new available array entries. */
> > > > > > -	virtio_wmb(vq->weak_barriers);
> > > > > > +	/*
> > > > > > +	 * Descriptors and available array need to be set before we expose
> > > > > > +	 * the new available array entries. virtio_wmb() should be enough
> > > > > > +	 * to ensure the order theoretically. However, a stronger barrier
> > > > > > +	 * is needed by ARM64. Otherwise, the stale data can be observed
> > > > > > +	 * by the host (vhost). A stronger barrier should work for other
> > > > > > +	 * architectures, but performance loss is expected.
> > > > > > +	 */
> > > > > > +	virtio_mb(false);
> > > > > >  	vq->split.avail_idx_shadow++;
> > > > > >  	vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > > > > >  					vq->split.avail_idx_shadow);
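To spell out the ordering requirement being debated: stripped of the virtio
plumbing, the guest-side publish path boils down to roughly the sketch below.
The structure and function names here are made up for illustration and
endianness handling is omitted; this is not the actual virtio_ring.c code.

/*
 * Illustrative sketch only: the slot write (1) must become visible to
 * the consumer before the index write (2) that publishes it, which is
 * what the barrier between the two stores is for.
 */
struct avail_ring_sketch {
	u16 idx;
	u16 ring[];
};

static void publish_entry(struct avail_ring_sketch *a, u16 *shadow,
			  u16 ring_size, u16 head)
{
	a->ring[*shadow & (ring_size - 1)] = head; /* 1: fill the slot    */
	smp_wmb();                                 /* order 1 before 2    */
	a->idx = ++(*shadow);                      /* 2: publish the slot */
}

The question in the rest of this thread is whether the dmb(ishst) that the
store barrier expands to on arm64 is sufficient when the consumer is vhost
on the host side.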
> > > > > >
> > > > > Replacing a DMB with a DSB is _very_ unlikely to be the correct solution
> > > > > here, especially when ordering accesses to coherent memory.
> > > > >
> > > > > In practice, either the larger timing difference from the DSB or the fact
> > > > > that you're going from a Store->Store barrier to a full barrier is what
> > > > > makes things "work" for you. Have you tried, for example, a DMB SY
> > > > > (e.g. via __smp_mb())?
> > > > >
> > > > > We definitely shouldn't take changes like this without a proper
> > > > > explanation of what is going on.
> > > > >
> > > >
> > > > Thanks for your comments, Will.
> > > >
> > > > Yes, DMB should work for us. However, it seems this instruction has issues
> > > > on NVIDIA's Grace Hopper. It's hard for me to understand how DMB and DSB
> > > > work at the hardware level. I agree it's not the solution to replace DMB
> > > > with DSB before we fully understand the root cause.
> > > >
> > > > I tried the possible replacements below. __smp_mb() can avoid the issue
> > > > like __mb() does. __ndelay(10) can avoid the issue, but __ndelay(9) doesn't.
> > > >
> > > > static inline int virtqueue_add_split(struct virtqueue *_vq, ...)
> > > > {
> > > >     :
> > > >     /* Put entry in available array (but don't update avail->idx until they
> > > >      * do sync). */
> > > >     avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> > > >     vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> > > >
> > > >     /* Descriptors and available array need to be set before we expose the
> > > >      * new available array entries. */
> > > >     // Broken: virtio_wmb(vq->weak_barriers);
> > > >     // Broken: __dma_mb();
> > > >     // Work:   __mb();
> > > >     // Work:   __smp_mb();
> > Did you try __smp_wmb? And wmb?
>
> virtio_wmb(false) is equivalent to __smp_wmb(), which is broken.
> __wmb() works as well. No issue found with it.

So this is

arch/arm64/include/asm/barrier.h:#define __smp_wmb()	dmb(ishst)

versus

arch/arm64/include/asm/barrier.h:#define __wmb()	dsb(st)

right? Really interesting. And you are saying dma_wmb does not work either:

arch/arm64/include/asm/barrier.h:#define __dma_wmb()	dmb(oshst)

Really strange.

However, I found this:

https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Memory-attributes/Cacheable-and-shareable-memory-attributes

Going by this picture, all CPUs are in the inner shareable domain, so ishst
should be enough to synchronize, right?

However, there are two points that give me pause here:

	Inner shareable
	This represents a shareability domain that can be shared by multiple
	processors, but not necessarily all of the agents in the system. A
	system might have multiple Inner Shareable domains. An operation that
	affects one Inner Shareable domain does not affect other Inner
	Shareable domains in the system. An example of such a domain might be
	a quad-core Cortex-A57 cluster.

Point 1 - so is it possible that there are multiple inner shareable domains
in this system, with vhost running inside one and the guest inside another?
Does anyone know whether that is the case on NVIDIA Grace Hopper, and how to
find out?

	Outer shareable
	An outer shareable (OSH) domain is shared by multiple agents and can
	consist of one or more inner shareable domains. An operation that
	affects an outer shareable domain also implicitly affects all inner
	shareable domains inside it. However, it does not otherwise behave as
	an inner shareable operation.

Point 2 - I do not get this last sentence. If it affects all inner shareable
domains, then how does it "not otherwise behave as an inner shareable
operation"?
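Whichever way the shareability question goes, the reason a too-weak store
barrier shows up as a stale entry is easiest to see from the read side, which
pairs with the publish sketch earlier in this mail. Again, this is an
illustrative sketch reusing the made-up struct avail_ring_sketch, not the
actual vhost code.

/*
 * Illustrative consumer-side sketch (not the actual vhost code): the
 * published index is read first (1), then the slot it covers (2).  If
 * the producer's two stores are not ordered as observed from here,
 * step 2 can return the old, stale head even though step 1 already
 * saw the new index.
 */
static bool consume_entry(struct avail_ring_sketch *a, u16 *last_seen,
			  u16 ring_size, u16 *head)
{
	if (READ_ONCE(a->idx) == *last_seen)           /* 1: read the index */
		return false;                          /* nothing published */

	smp_rmb();                                     /* order 1 before 2  */
	*head = a->ring[*last_seen & (ring_size - 1)]; /* 2: read the slot  */
	(*last_seen)++;
	return true;
}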
> > > >     // Work:   __ndelay(100);
> > > >     // Work:   __ndelay(10);
> > > >     // Broken: __ndelay(9);
> > > >
> > > >     vq->split.avail_idx_shadow++;
> > > >     vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > > >                                                  vq->split.avail_idx_shadow);
> > > What if you stick __ndelay here?
> > And keep virtio_wmb above?
>
> The result has been shared through a separate reply.
>
> > > >     vq->num_added++;
> > > >
> > > >     pr_debug("Added buffer head %i to %p\n", head, vq);
> > > >     END_USE(vq);
> > > >     :
> > > > }
> > > >
> > > > I also tried to measure the time consumed by various barrier-related
> > > > instructions using ktime_get_ns(); the barrier itself should account for
> > > > most of the measured time. __smp_mb() is slower than __smp_wmb() but
> > > > faster than __mb().
> > > >
> > > >   Instruction           Range of used time in ns
> > > >   ----------------------------------------------
> > > >   __smp_wmb()           [32  1128032]
> > > >   __smp_mb()            [32  1160096]
> > > >   __mb()                [32  1162496]
> > > >
>
> Thanks,
> Gavin
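For reference, per-instruction timings like the ones in the table above can be
collected with something along these lines. This is an illustrative sketch
only; the measurement code actually used is not included in this thread, and
the wide upper bounds presumably include runs that were preempted or
interrupted.

/*
 * Illustrative sketch only -- not the measurement code used for the
 * table above.  Times a single barrier with ktime_get_ns(); repeated
 * calls give a [min max] range like the one quoted.
 */
static void time_one_barrier(void)
{
	u64 t0, t1;

	t0 = ktime_get_ns();
	__smp_wmb();            /* or __smp_mb() / __mb() */
	t1 = ktime_get_ns();

	pr_info("barrier: %llu ns\n", t1 - t0);
}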