Date: Tue, 27 Feb 2024 10:37:18 +0000
From: Catalin Marinas
To: Jason Gunthorpe
Cc: Alexander Gordeev, Andrew Morton, Christian Borntraeger, Borislav Petkov, Dave Hansen, "David S. Miller", Eric Dumazet, Gerald Schaefer, Vasily Gorbik, Heiko Carstens, "H.
Peter Anvin", Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, llvm@lists.linux.dev, Ingo Molnar, Bill Wendling, Nathan Chancellor, Nick Desaulniers, netdev@vger.kernel.org, Paolo Abeni, Salil Mehta, Jijie Shao, Sven Schnelle, Thomas Gleixner, x86@kernel.org, Yisen Zhuang, Arnd Bergmann, Leon Romanovsky, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Mark Rutland, Michael Guralnik, patches@lists.linux.dev, Niklas Schnelle, Will Deacon
Subject: Re: [PATCH 4/6] arm64/io: Provide a WC friendly __iowriteXX_copy()
References: <0-v1-38290193eace+5-mlx5_arm_wc_jgg@nvidia.com> <4-v1-38290193eace+5-mlx5_arm_wc_jgg@nvidia.com>
In-Reply-To: <4-v1-38290193eace+5-mlx5_arm_wc_jgg@nvidia.com>

On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> +/*
> + * This generates a memcpy that works on a from/to address which is aligned to
> + * bits. Count is in terms of the number of bits sized quantities to copy. It
> + * optimizes to use the STR groupings when possible so that it is WC friendly.
> + */
> +#define memcpy_toio_aligned(to, from, count, bits)                        \
> +	({                                                                 \
> +		volatile u##bits __iomem *_to = to;                        \
> +		const u##bits *_from = from;                               \
> +		size_t _count = count;                                     \
> +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8);  \
> +									   \
> +		for (; _from < _end_from; _from += 8, _to += 8)            \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 8);  \
> +		if ((_count % 8) >= 4) {                                   \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 4);  \
> +			_from += 4;                                        \
> +			_to += 4;                                          \
> +		}                                                          \
> +		if ((_count % 4) >= 2) {                                   \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 2);  \
> +			_from += 2;                                        \
> +			_to += 2;                                          \
> +		}                                                          \
> +		if (_count % 2)                                            \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 1);  \
> +	})

Do we actually need all this if count is not constant? If it's not
performance critical anywhere, I'd rather copy the generic
implementation; it's easier to read.

Otherwise, apart from the __raw_writeq() typo that Will mentioned, the
patch looks fine to me.

-- 
Catalin
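[For readers following the thread: the macro quoted above splits a run-time count into whole groups of 8 elements, then at most one group of 4, one of 2, and one single element, so each group can be emitted as a fixed store sequence that write-combines well. A hypothetical user-space model of just that tail-splitting arithmetic, with an illustrative name (copy_in_groups() is not kernel API and no MMIO is performed):]

```c
#include <assert.h>
#include <stddef.h>

/*
 * Model of the tail splitting in memcpy_toio_aligned(): groups[]
 * receives how many 8-, 4-, 2- and 1-element bursts would be issued
 * for a given element count. Mirrors the macro's tests:
 * ALIGN_DOWN(count, 8), (count % 8) >= 4, (count % 4) >= 2, count % 2.
 */
static size_t copy_in_groups(size_t count, size_t groups[4])
{
	groups[0] = count / 8;		/* full 8-element bursts */
	groups[1] = (count % 8) >= 4;	/* at most one 4-element burst */
	groups[2] = (count % 4) >= 2;	/* at most one 2-element burst */
	groups[3] = count % 2;		/* at most one single store */

	/* total elements covered; must always equal count */
	return groups[0] * 8 + groups[1] * 4 + groups[2] * 2 + groups[3];
}
```

Every count decomposes exactly once: e.g. 13 elements become one burst of 8, one of 4, and one single store, so at most three store groups are ever needed for the tail.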