From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	Liam Merwick,
	Sean Christopherson
Subject: [PATCH 6.19 192/220] KVM: SEV: Drop WARN on large size for KVM_MEMORY_ENCRYPT_REG_REGION
Date: Mon, 20 Apr 2026 17:42:13 +0200
Message-ID: <20260420153940.938531219@linuxfoundation.org>
In-Reply-To: <20260420153934.013228280@linuxfoundation.org>
References: <20260420153934.013228280@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Sean Christopherson

commit 8acffeef5ef720c35e513e322ab08e32683f32f2 upstream.

Drop the WARN in sev_pin_memory() on npages overflowing an int, as the
WARN is comically trivial to trigger from userspace, e.g. by doing:

	struct kvm_enc_region range = {
		.addr = 0,
		.size = -1ul,
	};

	__vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &range);

Note, the checks in sev_mem_enc_register_region() that presumably exist
to verify the incoming address+size are completely worthless, as both
"addr" and "size" are u64s and SEV is 64-bit only, i.e. they _can't_ be
greater than ULONG_MAX.  That wart will be cleaned up in the near
future.

	if (range->addr > ULONG_MAX || range->size > ULONG_MAX)
		return -EINVAL;

Opportunistically add a comment to explain why the code calculates the
number of pages the "hard" way, e.g. instead of just shifting @ulen.
Fixes: 78824fabc72e ("KVM: SVM: fix svn_pin_memory()'s use of get_user_pages_fast()")
Cc: stable@vger.kernel.org
Reviewed-by: Liam Merwick
Tested-by: Liam Merwick
Link: https://patch.msgid.link/20260313003302.3136111-2-seanjc@google.com
Signed-off-by: Sean Christopherson
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kvm/svm/sev.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -683,10 +683,16 @@ static struct page **sev_pin_memory(stru
 	if (ulen == 0 || uaddr + ulen < uaddr)
 		return ERR_PTR(-EINVAL);
 
-	/* Calculate number of pages. */
+	/*
+	 * Calculate the number of pages that need to be pinned to cover the
+	 * entire range.  Note!  This isn't simply ulen >> PAGE_SHIFT, as KVM
+	 * doesn't require the incoming address+size to be page aligned!
+	 */
 	first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
 	last = ((uaddr + ulen - 1) & PAGE_MASK) >> PAGE_SHIFT;
 	npages = (last - first + 1);
+	if (npages > INT_MAX)
+		return ERR_PTR(-EINVAL);
 
 	locked = sev->pages_locked + npages;
 	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
@@ -695,9 +701,6 @@ static struct page **sev_pin_memory(stru
 		return ERR_PTR(-ENOMEM);
 	}
 
-	if (WARN_ON_ONCE(npages > INT_MAX))
-		return ERR_PTR(-EINVAL);
-
 	/* Avoid using vmalloc for smaller buffers. */
 	size = npages * sizeof(struct page *);
 	if (size > PAGE_SIZE)