From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH 2/3] xfs: remove kmem_alloc_io()
Date: Wed, 14 Jul 2021 12:34:39 +1000
Message-Id: <20210714023440.2608690-3-david@fromorbit.com>
In-Reply-To: <20210714023440.2608690-1-david@fromorbit.com>
References: <20210714023440.2608690-1-david@fromorbit.com>

From: Dave Chinner

Since commit 59bb47985c1d ("mm, sl[aou]b: guarantee natural alignment
for kmalloc(power-of-two)"), the core slab code now guarantees slab
alignment in all situations sufficient for IO purposes (i.e.
minimum of 512 byte alignment of >= 512 byte sized heap allocations),
so we no longer need the workaround in the XFS code to provide this
guarantee.

Replace the use of kmem_alloc_io() with kmem_alloc() or
kmem_alloc_large() appropriately, and remove the kmem_alloc_io()
interface altogether.

Signed-off-by: Dave Chinner
Reviewed-by: Darrick J. Wong
---
 fs/xfs/kmem.c            | 25 -------------------------
 fs/xfs/kmem.h            |  1 -
 fs/xfs/xfs_buf.c         |  3 +--
 fs/xfs/xfs_buf.h         |  6 ------
 fs/xfs/xfs_log.c         |  3 +--
 fs/xfs/xfs_log_recover.c |  4 +---
 fs/xfs/xfs_trace.h       |  1 -
 7 files changed, 3 insertions(+), 40 deletions(-)

diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c
index e986b95d94c9..3f2979fd2f2b 100644
--- a/fs/xfs/kmem.c
+++ b/fs/xfs/kmem.c
@@ -56,31 +56,6 @@ __kmem_vmalloc(size_t size, xfs_km_flags_t flags)
 	return ptr;
 }
 
-/*
- * Same as kmem_alloc_large, except we guarantee the buffer returned is aligned
- * to the @align_mask. We only guarantee alignment up to page size, we'll clamp
- * alignment at page size if it is larger. vmalloc always returns a PAGE_SIZE
- * aligned region.
- */
-void *
-kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags)
-{
-	void *ptr;
-
-	trace_kmem_alloc_io(size, flags, _RET_IP_);
-
-	if (WARN_ON_ONCE(align_mask >= PAGE_SIZE))
-		align_mask = PAGE_SIZE - 1;
-
-	ptr = kmem_alloc(size, flags | KM_MAYFAIL);
-	if (ptr) {
-		if (!((uintptr_t)ptr & align_mask))
-			return ptr;
-		kfree(ptr);
-	}
-	return __kmem_vmalloc(size, flags);
-}
-
 void *
 kmem_alloc_large(size_t size, xfs_km_flags_t flags)
 {
diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h
index 38007117697e..9ff20047f8b8 100644
--- a/fs/xfs/kmem.h
+++ b/fs/xfs/kmem.h
@@ -57,7 +57,6 @@ kmem_flags_convert(xfs_km_flags_t flags)
 }
 
 extern void *kmem_alloc(size_t, xfs_km_flags_t);
-extern void *kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags);
 extern void *kmem_alloc_large(size_t size, xfs_km_flags_t);
 static inline void
 kmem_free(const void *ptr)
 {
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 8ff42b3585e0..a5ef1f9eb622 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -315,7 +315,6 @@ xfs_buf_alloc_kmem(
 	struct xfs_buf	*bp,
 	xfs_buf_flags_t	flags)
 {
-	int		align_mask = xfs_buftarg_dma_alignment(bp->b_target);
 	xfs_km_flags_t	kmflag_mask = KM_NOFS;
 	size_t		size = BBTOB(bp->b_length);
 
@@ -323,7 +322,7 @@ xfs_buf_alloc_kmem(
 	if (!(flags & XBF_READ))
 		kmflag_mask |= KM_ZERO;
 
-	bp->b_addr = kmem_alloc_io(size, align_mask, kmflag_mask);
+	bp->b_addr = kmem_alloc(size, kmflag_mask);
 	if (!bp->b_addr)
 		return -ENOMEM;
 
diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
index 464dc548fa23..cfbe37d73293 100644
--- a/fs/xfs/xfs_buf.h
+++ b/fs/xfs/xfs_buf.h
@@ -355,12 +355,6 @@ extern int xfs_setsize_buftarg(struct xfs_buftarg *, unsigned int);
 #define xfs_getsize_buftarg(buftarg)	block_size((buftarg)->bt_bdev)
 #define xfs_readonly_buftarg(buftarg)	bdev_read_only((buftarg)->bt_bdev)
 
-static inline int
-xfs_buftarg_dma_alignment(struct xfs_buftarg *bt)
-{
-	return queue_dma_alignment(bt->bt_bdev->bd_disk->queue);
-}
-
 int xfs_buf_reverify(struct xfs_buf *bp, const struct xfs_buf_ops *ops);
 bool xfs_verify_magic(struct xfs_buf *bp, __be32 dmagic);
 bool xfs_verify_magic16(struct xfs_buf *bp, __be16 dmagic);
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 36fa2650b081..826b3cf5fd72 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1451,7 +1451,6 @@ xlog_alloc_log(
 	 */
 	ASSERT(log->l_iclog_size >= 4096);
 	for (i = 0; i < log->l_iclog_bufs; i++) {
-		int align_mask = xfs_buftarg_dma_alignment(mp->m_logdev_targp);
 		size_t bvec_size = howmany(log->l_iclog_size, PAGE_SIZE) *
				sizeof(struct bio_vec);
 
@@ -1463,7 +1462,7 @@ xlog_alloc_log(
 		iclog->ic_prev = prev_iclog;
 		prev_iclog = iclog;
 
-		iclog->ic_data = kmem_alloc_io(log->l_iclog_size, align_mask,
+		iclog->ic_data = kmem_alloc_large(log->l_iclog_size,
						KM_MAYFAIL | KM_ZERO);
 		if (!iclog->ic_data)
 			goto out_free_iclog;
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
index 3c08d495844d..d55fc7caa227 100644
--- a/fs/xfs/xfs_log_recover.c
+++ b/fs/xfs/xfs_log_recover.c
@@ -79,8 +79,6 @@ xlog_alloc_buffer(
 	struct xlog	*log,
 	int		nbblks)
 {
-	int align_mask = xfs_buftarg_dma_alignment(log->l_targ);
-
 	/*
 	 * Pass log block 0 since we don't have an addr yet, buffer will be
 	 * verified on read.
@@ -108,7 +106,7 @@ xlog_alloc_buffer(
 	if (nbblks > 1 && log->l_sectBBsize > 1)
 		nbblks += log->l_sectBBsize;
 	nbblks = round_up(nbblks, log->l_sectBBsize);
-	return kmem_alloc_io(BBTOB(nbblks), align_mask, KM_MAYFAIL | KM_ZERO);
+	return kmem_alloc_large(BBTOB(nbblks), KM_MAYFAIL | KM_ZERO);
 }
 
 /*
diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
index f9d8d605f9b1..6865e838a71b 100644
--- a/fs/xfs/xfs_trace.h
+++ b/fs/xfs/xfs_trace.h
@@ -3689,7 +3689,6 @@ DEFINE_EVENT(xfs_kmem_class, name, \
 	TP_PROTO(ssize_t size, int flags, unsigned long caller_ip), \
 	TP_ARGS(size, flags, caller_ip))
 DEFINE_KMEM_EVENT(kmem_alloc);
-DEFINE_KMEM_EVENT(kmem_alloc_io);
 DEFINE_KMEM_EVENT(kmem_alloc_large);
 
 TRACE_EVENT(xfs_check_new_dalign,
-- 
2.31.1