From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: linux-s390@vger.kernel.org, borntraeger@de.ibm.com, frankja@linux.ibm.com,
	nsg@linux.ibm.com, nrb@linux.ibm.com, seiden@linux.ibm.com,
	schlameuss@linux.ibm.com, hca@linux.ibm.com, svens@linux.ibm.com,
	agordeev@linux.ibm.com, david@redhat.com, gerald.schaefer@linux.ibm.com
Subject: [PATCH v2 03/20] KVM: s390: Add gmap_helper_set_unused()
Date: Wed, 10 Sep 2025 20:07:29 +0200
Message-ID: <20250910180746.125776-4-imbrenda@linux.ibm.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20250910180746.125776-1-imbrenda@linux.ibm.com>
References: <20250910180746.125776-1-imbrenda@linux.ibm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add gmap_helper_set_unused() to mark userspace ptes as unused. Core mm
code will use that information to discard unused pages outright instead
of attempting to swap them out.
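For background, the consumer of this hint already exists in core mm:
pte_unused() reports the s390 _PAGE_UNUSED software bit that this helper
sets, and the reclaim path consults it when unmapping a page. Below is a
condensed, illustrative rendering of that consumer side, loosely based on
the pte_unused() handling in try_to_unmap_one() in mm/rmap.c; the wrapper
function name is made up for this sketch and the surrounding logic is
elided:

	/*
	 * Condensed sketch of the existing consumer in
	 * mm/rmap.c:try_to_unmap_one(). Returns true when the page can
	 * simply be discarded instead of being written to swap.
	 */
	static bool can_discard_instead_of_swap(struct vm_area_struct *vma,
						struct mm_struct *mm,
						struct folio *folio,
						pte_t pteval)
	{
		if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
			/*
			 * The guest indicated that the page content is of
			 * no interest anymore: drop the pte; a future
			 * access faults in a fresh zero page rather than
			 * swapping the old content back in.
			 */
			dec_mm_counter(mm, mm_counter(folio));
			return true;
		}
		return false;	/* fall through to the normal swap-out path */
	}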
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/gmap_helpers.h |  1 +
 arch/s390/mm/gmap_helpers.c          | 64 ++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+)

diff --git a/arch/s390/include/asm/gmap_helpers.h b/arch/s390/include/asm/gmap_helpers.h
index 5356446a61c4..459bd39d0887 100644
--- a/arch/s390/include/asm/gmap_helpers.h
+++ b/arch/s390/include/asm/gmap_helpers.h
@@ -11,5 +11,6 @@
 void gmap_helper_zap_one_page(struct mm_struct *mm, unsigned long vmaddr);
 void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned long end);
 int gmap_helper_disable_cow_sharing(void);
+void gmap_helper_set_unused(struct mm_struct *mm, unsigned long vmaddr);
 
 #endif /* _ASM_S390_GMAP_HELPERS_H */
diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
index a45d417ad951..69ffc0c6b654 100644
--- a/arch/s390/mm/gmap_helpers.c
+++ b/arch/s390/mm/gmap_helpers.c
@@ -91,6 +91,70 @@ void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned lo
 }
 EXPORT_SYMBOL_GPL(gmap_helper_discard);
 
+/**
+ * gmap_helper_set_unused() - mark a pte entry as unused
+ * @mm: the mm
+ * @vmaddr: the userspace address whose pte is to be marked
+ *
+ * Mark the pte corresponding to the given address as unused. This will cause
+ * core mm code to just drop this page instead of swapping it.
+ *
+ * This function needs to be called with interrupts disabled (for example
+ * while holding a spinlock), or while holding the mmap lock. Normally this
+ * function is called as a result of an unmap operation, and thus KVM common
+ * code will already hold kvm->mmu_lock in write mode.
+ *
+ * Context: Needs to be called while holding the mmap lock or with interrupts
+ * disabled.
+ */
+void gmap_helper_set_unused(struct mm_struct *mm, unsigned long vmaddr)
+{
+	pmd_t *pmdp, pmd, pmdval;
+	pud_t *pudp, pud;
+	p4d_t *p4dp, p4d;
+	pgd_t *pgdp, pgd;
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	pgdp = pgd_offset(mm, vmaddr);
+	pgd = pgdp_get(pgdp);
+	if (pgd_none(pgd) || !pgd_present(pgd))
+		return;
+
+	p4dp = p4d_offset(pgdp, vmaddr);
+	p4d = p4dp_get(p4dp);
+	if (p4d_none(p4d) || !p4d_present(p4d))
+		return;
+
+	pudp = pud_offset(p4dp, vmaddr);
+	pud = pudp_get(pudp);
+	if (pud_none(pud) || pud_leaf(pud) || !pud_present(pud))
+		return;
+
+	pmdp = pmd_offset(pudp, vmaddr);
+	pmd = pmdp_get_lockless(pmdp);
+	if (pmd_none(pmd) || pmd_leaf(pmd) || !pmd_present(pmd))
+		return;
+
+	ptep = pte_offset_map_rw_nolock(mm, pmdp, vmaddr, &pmdval, &ptl);
+	if (!ptep)
+		return;
+
+	if (spin_trylock(ptl)) {
+		/*
+		 * Make sure the pte we are touching is still the correct
+		 * one. In theory this check should not be needed, but
+		 * better safe than sorry.
+		 */
+		if (likely(pmd_same(pmdval, pmdp_get_lockless(pmdp))))
+			__atomic64_or(_PAGE_UNUSED, (long *)ptep);
+		spin_unlock(ptl);
+	}
+
+	pte_unmap(ptep);
+}
+EXPORT_SYMBOL_GPL(gmap_helper_set_unused);
+
 static int find_zeropage_pte_entry(pte_t *pte, unsigned long addr,
 				   unsigned long end, struct mm_walk *walk)
 {
-- 
2.51.0
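For illustration, a hypothetical caller honoring the locking contract
stated in the kernel-doc above. The function name and signature here are
made up for this sketch and are not part of the patch:

	#include <linux/mm.h>
	#include <asm/gmap_helpers.h>

	/*
	 * Hypothetical usage sketch: mark every pte of a guest-backed
	 * range as unused. Assumes the caller already holds
	 * kvm->mmu_lock in write mode, which satisfies the helper's
	 * "spinlock held or mmap lock" context rule.
	 */
	static void example_mark_range_unused(struct mm_struct *mm,
					      unsigned long vmaddr,
					      unsigned long end)
	{
		for (vmaddr &= PAGE_MASK; vmaddr < end; vmaddr += PAGE_SIZE)
			gmap_helper_set_unused(mm, vmaddr);
	}

Note that because gmap_helper_set_unused() only spin_trylock()s the pte
lock and bails out on contention, a missed marking merely means the page
may be swapped rather than discarded: _PAGE_UNUSED is an optimization
hint, not a correctness requirement.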