From: Michael Bommarito
To: Andrew Morton, Mike Rapoport, Peter Xu
Cc: David Carlier, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/1] mm/userfaultfd: validate effective UFFDIO_COPY ops after retry
Date: Wed, 13 May 2026 20:54:40 -0400
Message-ID: <20260514005440.3361406-2-michael.bommarito@gmail.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260514005440.3361406-1-michael.bommarito@gmail.com>
References: <20260514005440.3361406-1-michael.bommarito@gmail.com>

UFFDIO_COPY fills MAP_PRIVATE file-backed VMAs with anonymous memory.
mfill_atomic_pte_copy() implements that by overriding the VMA's uffd ops
with anon_uffd_ops when VM_SHARED is not set.

mfill_copy_folio_retry() can drop the destination VMA lock after an
initial copy_from_user() failure and reacquire the destination VMA. It
currently checks whether vma_uffd_ops() changed while the lock was
dropped, but that is not the same as checking whether the effective
UFFDIO_COPY ops changed. Private and shared shmem VMAs both expose
shmem_uffd_ops through vm_ops.
If a private shmem destination is replaced with a shared shmem
destination while the retry has dropped the lock, vma_uffd_ops() still
compares equal even though the effective copy ops changed from
anon_uffd_ops to shmem_uffd_ops. The stale anon folio can then be
installed into the new shared shmem VMA. mfill_atomic_install_pte()
sees a folio without a page-cache mapping and calls
folio_add_new_anon_rmap(), which reaches BUG_ON(!anon_vma) because the
new shared shmem VMA has no anon_vma.

Compare both the raw VMA uffd ops and the effective UFFDIO_COPY ops
across the retry. The raw comparison preserves the existing VMA-type
replacement guard, while the effective comparison also catches
replacements where the raw ops stay equal but the MAP_PRIVATE override
result changes. If either comparison changes, return -EAGAIN and let
the ioctl retry instead of installing the stale folio through the
wrong path.

Fixes: 292411fda25b ("mm/userfaultfd: detect VMA type change after copy retry in mfill_copy_folio_retry()")
Assisted-by: Codex:gpt-5-5-xhigh
Assisted-by: Claude:opus-4-7
Signed-off-by: Michael Bommarito
---
 mm/userfaultfd.c | 40 ++++++++++++++++++++++++----------------
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 180bad42fc79..5af13953c29a 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -69,6 +69,24 @@ static const struct vm_uffd_ops *vma_uffd_ops(struct vm_area_struct *vma)
 	return vma->vm_ops ? vma->vm_ops->uffd_ops : NULL;
 }
 
+static const struct vm_uffd_ops *vma_uffd_copy_ops(struct vm_area_struct *vma)
+{
+	const struct vm_uffd_ops *ops = vma_uffd_ops(vma);
+
+	if (!ops)
+		return NULL;
+
+	/*
+	 * UFFDIO_COPY fills MAP_PRIVATE file-backed mappings as anonymous
+	 * memory. This is an effective ops override, so retry validation must
+	 * compare the override result, not just vma->vm_ops->uffd_ops.
+	 */
+	if (!(vma->vm_flags & VM_SHARED))
+		return &anon_uffd_ops;
+
+	return ops;
+}
+
 static __always_inline bool validate_dst_vma(struct vm_area_struct *dst_vma,
 					     unsigned long dst_end)
 {
@@ -447,6 +465,7 @@ static int mfill_copy_folio_retry(struct mfill_state *state,
 				  struct folio *folio)
 {
 	const struct vm_uffd_ops *orig_ops = vma_uffd_ops(state->vma);
+	const struct vm_uffd_ops *orig_copy_ops = vma_uffd_copy_ops(state->vma);
 	unsigned long src_addr = state->src_addr;
 	void *kaddr;
 	int err;
@@ -469,10 +488,11 @@ static int mfill_copy_folio_retry(struct mfill_state *state,
 
 	/*
 	 * The VMA type may have changed while the lock was dropped
-	 * (e.g. replaced with a hugetlb mapping), making the caller's
-	 * ops pointer stale.
+	 * (e.g. replaced with a hugetlb mapping). Also catch changes where
+	 * the raw ops stay equal but the effective UFFDIO_COPY ops differ.
 	 */
-	if (vma_uffd_ops(state->vma) != orig_ops)
+	if (vma_uffd_ops(state->vma) != orig_ops ||
+	    vma_uffd_copy_ops(state->vma) != orig_copy_ops)
 		return -EAGAIN;
 
 	err = mfill_establish_pmd(state);
@@ -545,19 +565,7 @@ static int __mfill_atomic_pte(struct mfill_state *state,
 
 static int mfill_atomic_pte_copy(struct mfill_state *state)
 {
-	const struct vm_uffd_ops *ops = vma_uffd_ops(state->vma);
-
-	/*
-	 * The normal page fault path for a MAP_PRIVATE mapping in a
-	 * file-backed VMA will invoke the fault, fill the hole in the file and
-	 * COW it right away. The result generates plain anonymous memory.
-	 * So when we are asked to fill a hole in a MAP_PRIVATE mapping, we'll
-	 * generate anonymous memory directly without actually filling the
-	 * hole. For the MAP_PRIVATE case the robustness check only happens in
-	 * the pagetable (to verify it's still none) and not in the page cache.
-	 */
-	if (!(state->vma->vm_flags & VM_SHARED))
-		ops = &anon_uffd_ops;
+	const struct vm_uffd_ops *ops = vma_uffd_copy_ops(state->vma);
 
 	return __mfill_atomic_pte(state, ops);
 }
-- 
2.46.0