From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com
Subject: [PATCH v13 20/20] KVM: selftests: guest_memfd mmap() test when mmap is supported
Date: Wed, 9 Jul 2025 11:59:46 +0100
Message-ID: <20250709105946.4009897-21-tabba@google.com>
In-Reply-To: <20250709105946.4009897-1-tabba@google.com>
References: <20250709105946.4009897-1-tabba@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

Expand the guest_memfd selftests to comprehensively test host userspace
mmap functionality for guest_memfd-backed memory when supported by the
VM type.

Introduce new test cases to verify the following:

* Successful mmap operations: Ensure that MAP_SHARED mappings succeed
  when guest_memfd mmap is enabled.

* Data integrity: Validate that data written to the mmap'd region
  persists and reads back correctly.

* fallocate interaction: Test that fallocate(FALLOC_FL_PUNCH_HOLE)
  correctly zeros out mapped pages.

* Out-of-bounds access: Verify that accessing memory beyond the
  guest_memfd's size correctly triggers a SIGBUS signal.

* Unsupported mmap: Confirm that mmap attempts fail as expected when
  guest_memfd mmap support is not enabled for the specific guest_memfd
  instance or VM type.

* Flag validity: Introduce test_vm_type_gmem_flag_validity() to
  systematically test that only allowed guest_memfd creation flags are
  accepted for different VM types (e.g., GUEST_MEMFD_FLAG_MMAP for
  default VMs, no flags for CoCo VMs).

The existing tests for guest_memfd creation (multiple instances,
invalid sizes), file read/write, file size, and invalid punch hole
operations are integrated into the new test_with_type() framework to
allow testing across different VM types.
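For reference, the SIGBUS-handling pattern that the new test_fault_overflow()
test relies on can be exercised outside the KVM selftest harness. Below is a
minimal standalone sketch (not part of the patch) that assumes an ordinary
memfd_create() file as a stand-in for a mappable guest_memfd and uses plain
assert() in place of TEST_ASSERT():

/*
 * Standalone illustration only: memfd_create() stands in for guest_memfd,
 * and assert() stands in for TEST_ASSERT(). The names below are not part
 * of the patch.
 */
#define _GNU_SOURCE
#include <assert.h>
#include <setjmp.h>
#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf jmpbuf;

static void sigbus_handler(int signum)
{
	(void)signum;
	siglongjmp(jmpbuf, 1);
}

int main(void)
{
	size_t page_size = getpagesize();
	struct sigaction sa = { .sa_handler = sigbus_handler };
	char *mem;
	int fd;

	/* One page of backing store, but a four-page mapping. */
	fd = memfd_create("sigbus-sketch", 0);
	assert(fd >= 0);
	assert(!ftruncate(fd, page_size));

	mem = mmap(NULL, page_size * 4, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	assert(mem != MAP_FAILED);

	/* Touching a page past EOF must raise SIGBUS; catch it and jump back. */
	sigaction(SIGBUS, &sa, NULL);
	if (sigsetjmp(jmpbuf, 1) == 0) {
		mem[page_size] = 0x55;
		assert(0 && "expected SIGBUS");
	}

	/* In-bounds accesses keep working after the fault was caught. */
	mem[0] = 0x55;
	assert(mem[0] == 0x55);

	munmap(mem, page_size * 4);
	close(fd);
	return 0;
}

The selftest proper follows the same sequence, but maps the guest_memfd at
four times its size and checks that the in-bounds range still holds the data
written before the fault.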
Reviewed-by: James Houghton
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 .../testing/selftests/kvm/guest_memfd_test.c | 197 ++++++++++++++++--
 1 file changed, 176 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 341ba616cf55..1252e74fbb8f 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -13,6 +13,8 @@
 
 #include <linux/bitmap.h>
 #include <linux/falloc.h>
+#include <setjmp.h>
+#include <signal.h>
 #include <sys/mman.h>
 #include <sys/types.h>
 #include <sys/stat.h>
@@ -34,12 +36,83 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_supported(int fd, size_t page_size, size_t total_size)
+{
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
+	TEST_ASSERT(mem == MAP_FAILED, "Copy-on-write not allowed by guest_memfd.");
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() for guest_memfd should succeed.");
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), val);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed.");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), val);
+
+	memset(mem, val, page_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), val);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap() should succeed.");
+}
+
+static sigjmp_buf jmpbuf;
+void fault_sigbus_handler(int signum)
+{
+	siglongjmp(jmpbuf, 1);
+}
+
+static void test_fault_overflow(int fd, size_t page_size, size_t total_size)
+{
+	struct sigaction sa_old, sa_new = {
+		.sa_handler = fault_sigbus_handler,
+	};
+	size_t map_size = total_size * 4;
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, map_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() for guest_memfd should succeed.");
+
+	sigaction(SIGBUS, &sa_new, &sa_old);
+	if (sigsetjmp(jmpbuf, 1) == 0) {
+		memset(mem, 0xaa, map_size);
+		TEST_ASSERT(false, "memset() should have triggered SIGBUS.");
+	}
+	sigaction(SIGBUS, &sa_old, NULL);
+
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), val);
+
+	ret = munmap(mem, map_size);
+	TEST_ASSERT(!ret, "munmap() should succeed.");
+}
+
+static void test_mmap_not_supported(int fd, size_t page_size, size_t total_size)
 {
 	char *mem;
 
 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -120,26 +193,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
 	}
 }
 
-static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
+static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
+						  uint64_t guest_memfd_flags,
+						  size_t page_size)
 {
-	size_t page_size = getpagesize();
-	uint64_t flag;
 	size_t size;
 	int fd;
 
 	for (size = 1; size < page_size; size++) {
-		fd = __vm_create_guest_memfd(vm, size, 0);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
+		fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
+		TEST_ASSERT(fd < 0 && errno == EINVAL,
 			    "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
 			    size);
 	}
-
-	for (flag = BIT(0); flag; flag <<= 1) {
-		fd = __vm_create_guest_memfd(vm, page_size, flag);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
-			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
-			    flag);
-	}
 }
 
 static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
@@ -171,30 +237,119 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }
 
-int main(int argc, char *argv[])
+static bool check_vm_type(unsigned long vm_type)
 {
-	size_t page_size;
+	/*
+	 * Not all architectures support KVM_CAP_VM_TYPES. However, those that
+	 * support guest_memfd have that support for the default VM type.
+	 */
+	if (vm_type == VM_TYPE_DEFAULT)
+		return true;
+
+	return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
+}
+
+static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
+			   bool expect_mmap_allowed)
+{
+	struct kvm_vm *vm;
 	size_t total_size;
+	size_t page_size;
 	int fd;
-	struct kvm_vm *vm;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+	if (!check_vm_type(vm_type))
+		return;
 
 	page_size = getpagesize();
 	total_size = page_size * 4;
 
-	vm = vm_create_barebones();
+	vm = vm_create_barebones_type(vm_type);
 
-	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
+	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
 
-	fd = vm_create_guest_memfd(vm, total_size, 0);
+	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
 
 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+
+	if (expect_mmap_allowed) {
+		test_mmap_supported(fd, page_size, total_size);
+		test_fault_overflow(fd, page_size, total_size);
+
+	} else {
+		test_mmap_not_supported(fd, page_size, total_size);
+	}
+
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);
 
 	close(fd);
+	kvm_vm_free(vm);
+}
+
+static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
+					    uint64_t expected_valid_flags)
+{
+	size_t page_size = getpagesize();
+	struct kvm_vm *vm;
+	uint64_t flag = 0;
+	int fd;
+
+	if (!check_vm_type(vm_type))
+		return;
+
+	vm = vm_create_barebones_type(vm_type);
+
+	for (flag = BIT(0); flag; flag <<= 1) {
+		fd = __vm_create_guest_memfd(vm, page_size, flag);
+
+		if (flag & expected_valid_flags) {
+			TEST_ASSERT(fd >= 0,
+				    "guest_memfd() with flag '0x%lx' should be valid",
+				    flag);
+			close(fd);
+		} else {
+			TEST_ASSERT(fd < 0 && errno == EINVAL,
+				    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
+				    flag);
+		}
+	}
+
+	kvm_vm_free(vm);
+}
+
+static void test_gmem_flag_validity(void)
+{
+	uint64_t non_coco_vm_valid_flags = 0;
+
+	if (kvm_has_cap(KVM_CAP_GMEM_MMAP))
+		non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_MMAP;
+
+	test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
+
+#ifdef __x86_64__
+	test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
+#endif
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+
+	test_gmem_flag_validity();
+
+	test_with_type(VM_TYPE_DEFAULT, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_MMAP)) {
+		test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_MMAP,
+			       true);
+	}
+
+#ifdef __x86_64__
+	test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
+#endif
 }
-- 
2.50.0.727.gbf7dc18ff4-goog