From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Hugh Dickins, Chris Li, David Hildenbrand,
    Yosry Ahmed, "Huang, Ying", Nhat Pham, Johannes Weiner, Baolin Wang,
    Baoquan He, Barry Song, Kalesh Singh, Kemeng Shi, Tim Chen, Ryan Roberts,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 06/28] mm, swap: rearrange swap cluster definition and helpers
Date: Thu, 15 May 2025 04:17:06 +0800
Message-ID: <20250514201729.48420-7-ryncsn@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250514201729.48420-1-ryncsn@gmail.com>
References:
<20250514201729.48420-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Kairui Song

No feature change. Move all cluster-related definitions and helpers to
mm/swap.h, tidy them up, and add a "swap_" prefix to all cluster
lock/unlock helpers so they can be used more easily outside of the swap
files.

Signed-off-by: Kairui Song
---
 include/linux/swap.h |  34 ---------------
 mm/swap.h            |  62 ++++++++++++++++++++++++++
 mm/swapfile.c        | 102 +++++++++++++------------------------------
 3 files changed, 92 insertions(+), 106 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 0e52ac4e817d..1e7d9d55c39a 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -234,40 +234,6 @@ enum {
 /* Special value in each swap_map continuation */
 #define SWAP_CONT_MAX	0x7f	/* Max count */
 
-/*
- * We use this to track usage of a cluster. A cluster is a block of swap disk
- * space with SWAPFILE_CLUSTER pages long and naturally aligns in disk. All
- * free clusters are organized into a list. We fetch an entry from the list to
- * get a free cluster.
- *
- * The flags field determines if a cluster is free. This is
- * protected by cluster lock.
- */
-struct swap_cluster_info {
-	spinlock_t lock;	/*
-				 * Protect swap_cluster_info fields
-				 * other than list, and swap_info_struct->swap_map
-				 * elements corresponding to the swap cluster.
-				 */
-	u16 count;
-	u8 flags;
-	u8 order;
-	struct list_head list;
-};
-
-/* All on-list cluster must have a non-zero flag. */
-enum swap_cluster_flags {
-	CLUSTER_FLAG_NONE = 0, /* For temporary off-list cluster */
-	CLUSTER_FLAG_FREE,
-	CLUSTER_FLAG_NONFULL,
-	CLUSTER_FLAG_FRAG,
-	/* Clusters with flags above are allocatable */
-	CLUSTER_FLAG_USABLE = CLUSTER_FLAG_FRAG,
-	CLUSTER_FLAG_FULL,
-	CLUSTER_FLAG_DISCARD,
-	CLUSTER_FLAG_MAX,
-};
-
 /*
  * The first page in the swap file is the swap header, which is always marked
  * bad to prevent it from being allocated as an entry. This also prevents the
diff --git a/mm/swap.h b/mm/swap.h
index 34af06bf6fa4..38d37d241f1c 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -5,10 +5,72 @@ struct mempolicy;
 extern int page_cluster;
 
+#ifdef CONFIG_THP_SWAP
+#define SWAPFILE_CLUSTER	HPAGE_PMD_NR
+#define swap_entry_order(order)	(order)
+#else
+#define SWAPFILE_CLUSTER	256
+#define swap_entry_order(order)	0
+#endif
+
+/*
+ * We use this to track usage of a cluster. A cluster is a block of swap disk
+ * space with SWAPFILE_CLUSTER pages long and naturally aligns in disk. All
+ * free clusters are organized into a list. We fetch an entry from the list to
+ * get a free cluster.
+ *
+ * The flags field determines if a cluster is free. This is
+ * protected by cluster lock.
+ */
+struct swap_cluster_info {
+	spinlock_t lock;	/*
+				 * Protect swap_cluster_info fields
+				 * other than list, and swap_info_struct->swap_map
+				 * elements corresponding to the swap cluster.
+				 */
+	u16 count;
+	u8 flags;
+	u8 order;
+	struct list_head list;
+};
+
+/* All on-list cluster must have a non-zero flag. */
+enum swap_cluster_flags {
+	CLUSTER_FLAG_NONE = 0, /* For temporary off-list cluster */
+	CLUSTER_FLAG_FREE,
+	CLUSTER_FLAG_NONFULL,
+	CLUSTER_FLAG_FRAG,
+	/* Clusters with flags above are allocatable */
+	CLUSTER_FLAG_USABLE = CLUSTER_FLAG_FRAG,
+	CLUSTER_FLAG_FULL,
+	CLUSTER_FLAG_DISCARD,
+	CLUSTER_FLAG_MAX,
+};
+
 #ifdef CONFIG_SWAP
 #include  /* for swp_offset */
 #include  /* for bio_end_io_t */
 
+static inline struct swap_cluster_info *swp_offset_cluster(
+		struct swap_info_struct *si, pgoff_t offset)
+{
+	return &si->cluster_info[offset / SWAPFILE_CLUSTER];
+}
+
+static inline struct swap_cluster_info *swap_lock_cluster(
+		struct swap_info_struct *si,
+		unsigned long offset)
+{
+	struct swap_cluster_info *ci = swp_offset_cluster(si, offset);
+	spin_lock(&ci->lock);
+	return ci;
+}
+
+static inline void swap_unlock_cluster(struct swap_cluster_info *ci)
+{
+	spin_unlock(&ci->lock);
+}
+
 /* linux/mm/page_io.c */
 int sio_pool_init(void);
 struct swap_iocb;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index aa031fd27847..ba3fd99eb5fa 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -58,9 +58,6 @@ static void swap_entries_free(struct swap_info_struct *si,
 static void swap_range_alloc(struct swap_info_struct *si,
 			     unsigned int nr_entries);
 static bool folio_swapcache_freeable(struct folio *folio);
-static struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
-					      unsigned long offset);
-static inline void unlock_cluster(struct swap_cluster_info *ci);
 
 static DEFINE_SPINLOCK(swap_lock);
 static unsigned int nr_swapfiles;
@@ -259,9 +256,9 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
 	 * swap_map is HAS_CACHE only, which means the slots have no page table
 	 * reference or pending writeback, and can't be allocated to others.
 	 */
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 	need_reclaim = swap_only_has_cache(si, offset, nr_pages);
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	if (!need_reclaim)
 		goto out_unlock;
@@ -386,21 +383,6 @@ static void discard_swap_cluster(struct swap_info_struct *si,
 	}
 }
 
-#ifdef CONFIG_THP_SWAP
-#define SWAPFILE_CLUSTER	HPAGE_PMD_NR
-
-#define swap_entry_order(order)	(order)
-#else
-#define SWAPFILE_CLUSTER	256
-
-/*
- * Define swap_entry_order() as constant to let compiler to optimize
- * out some code if !CONFIG_THP_SWAP
- */
-#define swap_entry_order(order)	0
-#endif
-#define LATENCY_LIMIT	256
-
 static inline bool cluster_is_empty(struct swap_cluster_info *info)
 {
 	return info->count == 0;
@@ -426,34 +408,12 @@ static inline unsigned int cluster_index(struct swap_info_struct *si,
 	return ci - si->cluster_info;
 }
 
-static inline struct swap_cluster_info *offset_to_cluster(struct swap_info_struct *si,
-							  unsigned long offset)
-{
-	return &si->cluster_info[offset / SWAPFILE_CLUSTER];
-}
-
 static inline unsigned int cluster_offset(struct swap_info_struct *si,
 					  struct swap_cluster_info *ci)
 {
 	return cluster_index(si, ci) * SWAPFILE_CLUSTER;
 }
 
-static inline struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
-						     unsigned long offset)
-{
-	struct swap_cluster_info *ci;
-
-	ci = offset_to_cluster(si, offset);
-	spin_lock(&ci->lock);
-
-	return ci;
-}
-
-static inline void unlock_cluster(struct swap_cluster_info *ci)
-{
-	spin_unlock(&ci->lock);
-}
-
 static void move_cluster(struct swap_info_struct *si,
 			 struct swap_cluster_info *ci, struct list_head *list,
 			 enum swap_cluster_flags new_flags)
@@ -809,7 +769,7 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si,
 	}
 out:
 	relocate_cluster(si, ci);
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	if (si->flags & SWP_SOLIDSTATE) {
 		this_cpu_write(percpu_swap_cluster.offset[order], next);
 		this_cpu_write(percpu_swap_cluster.si[order], si);
@@ -853,7 +813,7 @@ static void swap_reclaim_full_clusters(struct swap_info_struct *si, bool force)
 		if (ci->flags == CLUSTER_FLAG_NONE)
 			relocate_cluster(si, ci);
 
-		unlock_cluster(ci);
+		swap_unlock_cluster(ci);
 		if (to_scan <= 0)
 			break;
 	}
@@ -889,10 +849,8 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
 		/* Serialize HDD SWAP allocation for each device. */
 		spin_lock(&si->global_cluster_lock);
 		offset = si->global_cluster->next[order];
-		if (offset == SWAP_ENTRY_INVALID)
-			goto new_cluster;
 
-		ci = lock_cluster(si, offset);
+		ci = swap_lock_cluster(si, offset);
 		/* Cluster could have been used by another order */
 		if (cluster_is_usable(ci, order)) {
 			if (cluster_is_empty(ci))
@@ -900,7 +858,7 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
 			found = alloc_swap_scan_cluster(si, ci, offset,
 							order, usage);
 		} else {
-			unlock_cluster(ci);
+			swap_unlock_cluster(ci);
 		}
 		if (found)
 			goto done;
@@ -1178,7 +1136,7 @@ static bool swap_alloc_fast(swp_entry_t *entry,
 	if (!si || !offset || !get_swap_device_info(si))
 		return false;
 
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 	if (cluster_is_usable(ci, order)) {
 		if (cluster_is_empty(ci))
 			offset = cluster_offset(si, ci);
@@ -1186,7 +1144,7 @@ static bool swap_alloc_fast(swp_entry_t *entry,
 		if (found)
 			*entry = swp_entry(si->type, found);
 	} else {
-		unlock_cluster(ci);
+		swap_unlock_cluster(ci);
 	}
 
 	put_swap_device(si);
@@ -1449,14 +1407,14 @@ static void swap_entries_put_cache(struct swap_info_struct *si,
 	unsigned long offset = swp_offset(entry);
 	struct swap_cluster_info *ci;
 
-	ci = lock_cluster(si, offset);
-	if (swap_only_has_cache(si, offset, nr))
+	ci = swap_lock_cluster(si, offset);
+	if (swap_only_has_cache(si, offset, nr)) {
 		swap_entries_free(si, ci, entry, nr);
-	else {
+	} else {
 		for (int i = 0; i < nr; i++, entry.val++)
 			swap_entry_put_locked(si, ci, entry, SWAP_HAS_CACHE);
 	}
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 }
 
 static bool swap_entries_put_map(struct swap_info_struct *si,
@@ -1474,7 +1432,7 @@ static bool swap_entries_put_map(struct swap_info_struct *si,
 	if (count != 1)
 		goto fallback;
 
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 	if (!swap_is_last_map(si, offset, nr, &has_cache)) {
 		goto locked_fallback;
 	}
@@ -1483,21 +1441,20 @@ static bool swap_entries_put_map(struct swap_info_struct *si,
 	else
 		for (i = 0; i < nr; i++)
 			WRITE_ONCE(si->swap_map[offset + i], SWAP_HAS_CACHE);
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	return has_cache;
 
 fallback:
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 locked_fallback:
 	for (i = 0; i < nr; i++, entry.val++) {
 		count = swap_entry_put_locked(si, ci, entry, 1);
 		if (count == SWAP_HAS_CACHE)
 			has_cache = true;
 	}
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	return has_cache;
-
 }
 
 /*
@@ -1545,7 +1502,7 @@ static void swap_entries_free(struct swap_info_struct *si,
 	unsigned char *map_end = map + nr_pages;
 
 	/* It should never free entries across different clusters */
-	VM_BUG_ON(ci != offset_to_cluster(si, offset + nr_pages - 1));
+	VM_BUG_ON(ci != swp_offset_cluster(si, offset + nr_pages - 1));
 	VM_BUG_ON(cluster_is_empty(ci));
 	VM_BUG_ON(ci->count < nr_pages);
 
@@ -1620,9 +1577,9 @@ bool swap_entry_swapped(struct swap_info_struct *si, swp_entry_t entry)
 	struct swap_cluster_info *ci;
 	int count;
 
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 	count = swap_count(si->swap_map[offset]);
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	return !!count;
 }
 
@@ -1645,7 +1602,7 @@ int swp_swapcount(swp_entry_t entry)
 
 	offset = swp_offset(entry);
 
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 
 	count = swap_count(si->swap_map[offset]);
 	if (!(count & COUNT_CONTINUED))
@@ -1668,7 +1625,7 @@ int swp_swapcount(swp_entry_t entry)
 		n *= (SWAP_CONT_MAX + 1);
 	} while (tmp_count & COUNT_CONTINUED);
 out:
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	return count;
 }
 
@@ -1683,7 +1640,7 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 	int i;
 	bool ret = false;
 
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 	if (nr_pages == 1) {
 		if (swap_count(map[roffset]))
 			ret = true;
@@ -1696,7 +1653,7 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 		}
 	}
 unlock_out:
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	return ret;
 }
 
@@ -2246,6 +2203,7 @@ static int unuse_mm(struct mm_struct *mm, unsigned int type)
  * Return 0 if there are no inuse entries after prev till end of
  * the map.
  */
+#define LATENCY_LIMIT	256
 static unsigned int find_next_to_unuse(struct swap_info_struct *si,
 				       unsigned int prev)
 {
@@ -2629,8 +2587,8 @@ static void wait_for_allocation(struct swap_info_struct *si)
 	BUG_ON(si->flags & SWP_WRITEOK);
 
 	for (offset = 0; offset < end; offset += SWAPFILE_CLUSTER) {
-		ci = lock_cluster(si, offset);
-		unlock_cluster(ci);
+		ci = swap_lock_cluster(si, offset);
+		swap_unlock_cluster(ci);
 	}
 }
 
@@ -3533,7 +3491,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
 	offset = swp_offset(entry);
 	VM_WARN_ON(nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
 
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 
 	err = 0;
 	for (i = 0; i < nr; i++) {
@@ -3588,7 +3546,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
 	}
 
 unlock_out:
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	return err;
 }
 
@@ -3688,7 +3646,7 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
 
 	offset = swp_offset(entry);
 
-	ci = lock_cluster(si, offset);
+	ci = swap_lock_cluster(si, offset);
 
 	count = swap_count(si->swap_map[offset]);
 
@@ -3748,7 +3706,7 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
 out_unlock_cont:
 	spin_unlock(&si->cont_lock);
 out:
-	unlock_cluster(ci);
+	swap_unlock_cluster(ci);
 	put_swap_device(si);
 outer:
 	if (page)
-- 
2.49.0