From: Jesse Taube <jesse@rivosinc.com>
To: linux-riscv@lists.infradead.org
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Conor Dooley, Rob Herring, Krzysztof Kozlowski, Clément Léger,
	Evan Green, Andrew Jones, Jesse Taube, Charlie Jenkins, Xiao Wang,
	Andy Chiu, Eric Biggers, Greentime Hu, Björn Töpel, Heiko Stuebner,
	Costa Shulyupin, Andrew Morton, Baoquan He, Anup Patel, Zong Li,
	Sami Tolvanen, Ben Dooks, Alexandre Ghiti, "Gustavo A. R. Silva",
	Erick Archer, Joel Granados, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v3 7/8] RISC-V: Report vector unaligned access speed hwprobe
Date: Mon, 24 Jun 2024 20:50:00 -0400
Message-ID: <20240625005001.37901-8-jesse@rivosinc.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240625005001.37901-1-jesse@rivosinc.com>
References: <20240625005001.37901-1-jesse@rivosinc.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Detect if vector misaligned accesses are faster or slower than equivalent
vector byte accesses. This is useful for usermode to know whether vector
byte accesses or vector misaligned accesses have better bandwidth for
operations like memcpy.

Signed-off-by: Jesse Taube <jesse@rivosinc.com>
---
V1 -> V2:
 - Add Kconfig options
 - Add WORD_EEW to vec-copy-unaligned.S
V2 -> V3:
 - Remove unnecessary comment
 - Remove local_irq_enable
---
 arch/riscv/Kconfig                         |  18 +++
 arch/riscv/kernel/Makefile                 |   3 +-
 arch/riscv/kernel/copy-unaligned.h         |   5 +
 arch/riscv/kernel/sys_hwprobe.c            |   6 +
 arch/riscv/kernel/unaligned_access_speed.c | 132 ++++++++++++++++++++-
 arch/riscv/kernel/vec-copy-unaligned.S     |  58 +++++++++
 6 files changed, 219 insertions(+), 3 deletions(-)
 create mode 100644 arch/riscv/kernel/vec-copy-unaligned.S

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index ffbe0fdd7fb3..6f9fd3748916 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -807,6 +807,24 @@ config RISCV_PROBE_VECTOR_UNALIGNED_ACCESS
 	  will dynamically determine the speed of vector unaligned accesses on
 	  the underlying system if they are supported.
 
+config RISCV_SLOW_VEC_UNALIGNED_ACCESS
+	bool "Assume the system supports slow vector unaligned memory accesses"
+	depends on NONPORTABLE
+	help
+	  Assume that the system supports slow vector unaligned memory accesses. The
+	  kernel and userspace programs may not be able to run at all on systems
+	  that do not support unaligned memory accesses.
+
+config RISCV_EFFICIENT_VEC_UNALIGNED_ACCESS
+	bool "Assume the system supports fast vector unaligned memory accesses"
+	depends on NONPORTABLE
+	help
+	  Assume that the system supports fast vector unaligned memory accesses. When
+	  enabled, this option improves the performance of the kernel on such
+	  systems. However, the kernel and userspace programs will run much more
+	  slowly, or will not be able to run at all, on systems that do not
+	  support efficient unaligned memory accesses.
+
 endchoice
 
 endmenu # "Platform type"

diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 5b243d46f4b1..291935a084d5 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -64,7 +64,8 @@ obj-$(CONFIG_MMU) += vdso.o vdso/
 
 obj-$(CONFIG_RISCV_MISALIGNED)	+= traps_misaligned.o
 obj-$(CONFIG_RISCV_MISALIGNED)	+= unaligned_access_speed.o
-obj-$(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS)	+= copy-unaligned.o
+obj-$(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS)	+= copy-unaligned.o
+obj-$(CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS)	+= vec-copy-unaligned.o
 
 obj-$(CONFIG_FPU)		+= fpu.o
 obj-$(CONFIG_FPU)		+= kernel_mode_fpu.o

diff --git a/arch/riscv/kernel/copy-unaligned.h b/arch/riscv/kernel/copy-unaligned.h
index e3d70d35b708..85d4d11450cb 100644
--- a/arch/riscv/kernel/copy-unaligned.h
+++ b/arch/riscv/kernel/copy-unaligned.h
@@ -10,4 +10,9 @@
 void __riscv_copy_words_unaligned(void *dst, const void *src, size_t size);
 void __riscv_copy_bytes_unaligned(void *dst, const void *src, size_t size);
 
+#ifdef CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS
+void __riscv_copy_vec_words_unaligned(void *dst, const void *src, size_t size);
+void __riscv_copy_vec_bytes_unaligned(void *dst, const void *src, size_t size);
+#endif
+
 #endif /* __RISCV_KERNEL_COPY_UNALIGNED_H */

diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
index 5b78ea5a84d1..46de3f145154 100644
--- a/arch/riscv/kernel/sys_hwprobe.c
+++ b/arch/riscv/kernel/sys_hwprobe.c
@@ -221,6 +221,12 @@ static u64 hwprobe_vec_misaligned(const struct cpumask *cpus)
 #else
 static u64 hwprobe_vec_misaligned(const struct cpumask *cpus)
 {
+	if (IS_ENABLED(CONFIG_RISCV_EFFICIENT_VEC_UNALIGNED_ACCESS))
+		return RISCV_HWPROBE_VEC_MISALIGNED_FAST;
+
+	if (IS_ENABLED(CONFIG_RISCV_SLOW_VEC_UNALIGNED_ACCESS))
+		return RISCV_HWPROBE_VEC_MISALIGNED_SLOW;
+
 	return RISCV_HWPROBE_VEC_MISALIGNED_UNKNOWN;
 }
 #endif

diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
index 8489e012cf23..a2a8f948500b 100644
--- a/arch/riscv/kernel/unaligned_access_speed.c
+++ b/arch/riscv/kernel/unaligned_access_speed.c
@@ -8,9 +8,11 @@
 #include <linux/jump_label.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
+#include <linux/kthread.h>
 #include <linux/types.h>
 #include <asm/cpufeature.h>
 #include <asm/hwprobe.h>
+#include <asm/vector.h>
 
 #include "copy-unaligned.h"
 
@@ -267,9 +269,129 @@ static int check_unaligned_access_speed_all_cpus(void)
 }
 #endif
 
+#ifdef CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS
+static void check_vector_unaligned_access(struct work_struct *unused)
+{
+	int cpu = smp_processor_id();
+	u64 start_cycles, end_cycles;
+	u64 word_cycles;
+	u64 byte_cycles;
+	int ratio;
+	unsigned long start_jiffies, now;
+	struct page *page;
+	void *dst;
+	void *src;
+	long speed = RISCV_HWPROBE_VEC_MISALIGNED_SLOW;
+
+	if (per_cpu(vector_misaligned_access, cpu) != RISCV_HWPROBE_VEC_MISALIGNED_SLOW)
+		return;
+
+	page = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+	if (!page) {
+		pr_warn("Allocation failure, not measuring vector misaligned performance\n");
+		return;
+	}
+
+	/* Make an unaligned destination buffer. */
+	dst = (void *)((unsigned long)page_address(page) | 0x1);
+	/* Unalign src as well, but differently (off by 1 + 2 = 3). */
+	src = dst + (MISALIGNED_BUFFER_SIZE / 2);
+	src += 2;
+	word_cycles = -1ULL;
+
+	/* Do a warmup. */
+	kernel_vector_begin();
+	__riscv_copy_vec_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+
+	start_jiffies = jiffies;
+	while ((now = jiffies) == start_jiffies)
+		cpu_relax();
+
+	/*
+	 * For a fixed amount of time, repeatedly try the function, and take
+	 * the best time in cycles as the measurement.
+	 */
+	while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
+		start_cycles = get_cycles64();
+		/* Ensure the CSR read can't reorder WRT to the copy. */
+		mb();
+		__riscv_copy_vec_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+		/* Ensure the copy ends before the end time is snapped. */
+		mb();
+		end_cycles = get_cycles64();
+		if ((end_cycles - start_cycles) < word_cycles)
+			word_cycles = end_cycles - start_cycles;
+	}
+
+	byte_cycles = -1ULL;
+	__riscv_copy_vec_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+	start_jiffies = jiffies;
+	while ((now = jiffies) == start_jiffies)
+		cpu_relax();
+
+	while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
+		start_cycles = get_cycles64();
+		mb();
+		__riscv_copy_vec_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+		mb();
+		end_cycles = get_cycles64();
+		if ((end_cycles - start_cycles) < byte_cycles)
+			byte_cycles = end_cycles - start_cycles;
+	}
+
+	kernel_vector_end();
+
+	/* Don't divide by zero. */
+	if (!word_cycles || !byte_cycles) {
+		pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned vector access speed\n",
+			cpu);
+
+		return;
+	}
+
+	if (word_cycles < byte_cycles)
+		speed = RISCV_HWPROBE_VEC_MISALIGNED_FAST;
+
+	ratio = div_u64((byte_cycles * 100), word_cycles);
+	pr_info("cpu%d: Ratio of vector byte access time to vector unaligned word access is %d.%02d, unaligned accesses are %s\n",
+		cpu,
+		ratio / 100,
+		ratio % 100,
+		(speed == RISCV_HWPROBE_VEC_MISALIGNED_FAST) ? "fast" : "slow");
+
+	per_cpu(vector_misaligned_access, cpu) = speed;
+}
+
+static int riscv_online_cpu_vec(unsigned int cpu)
+{
+	check_vector_unaligned_access(NULL);
+	return 0;
+}
+
+/* Measure unaligned access speed on all CPUs present at boot in parallel. */
+static int vec_check_unaligned_access_speed_all_cpus(void *unused)
+{
+	schedule_on_each_cpu(check_vector_unaligned_access);
+
+	/*
+	 * Setup hotplug callbacks for any new CPUs that come online or go
+	 * offline.
+	 */
+	cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
+				  riscv_online_cpu_vec, NULL);
+
+	return 0;
+}
+#else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
+static int vec_check_unaligned_access_speed_all_cpus(void *unused)
+{
+	return 0;
+}
+#endif
+
 static int check_unaligned_access_all_cpus(void)
 {
-	bool all_cpus_emulated;
+	bool all_cpus_emulated, all_cpus_vec_supported;
 	int cpu;
 
 	if (riscv_has_extension_unlikely(RISCV_ISA_EXT_ZICCLSM)) {
@@ -285,7 +407,13 @@ static int check_unaligned_access_all_cpus(void)
 	}
 
 	all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
-	check_vector_unaligned_access_emulated_all_cpus();
+	all_cpus_vec_supported = check_vector_unaligned_access_emulated_all_cpus();
+
+	if (all_cpus_vec_supported &&
+	    IS_ENABLED(CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS)) {
+		kthread_run(vec_check_unaligned_access_speed_all_cpus,
+			    NULL, "vec_check_unaligned_access_speed_all_cpus");
+	}
 
 	if (!all_cpus_emulated)
 		return check_unaligned_access_speed_all_cpus();

diff --git a/arch/riscv/kernel/vec-copy-unaligned.S b/arch/riscv/kernel/vec-copy-unaligned.S
new file mode 100644
index 000000000000..e5bc94917e60
--- /dev/null
+++ b/arch/riscv/kernel/vec-copy-unaligned.S
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2024 Rivos Inc. */
+
+#include <linux/args.h>
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+	.text
+
+#define WORD_EEW 32
+
+#define WORD_SEW CONCATENATE(e, WORD_EEW)
+#define VEC_L CONCATENATE(vle, WORD_EEW).v
+#define VEC_S CONCATENATE(vse, WORD_EEW).v
+
+/* void __riscv_copy_vec_words_unaligned(void *, const void *, size_t) */
+/* Performs a memcpy without aligning buffers, using word loads and stores. */
+/* Note: The size is truncated to a multiple of WORD_EEW */
+SYM_FUNC_START(__riscv_copy_vec_words_unaligned)
+	andi  a4, a2, ~(WORD_EEW-1)
+	beqz  a4, 2f
+	add   a3, a1, a4
+	.option push
+	.option arch, +zve32x
+1:
+	vsetivli t0, 8, WORD_SEW, m8, ta, ma
+	VEC_L v0, (a1)
+	VEC_S v0, (a0)
+	addi  a0, a0, WORD_EEW
+	addi  a1, a1, WORD_EEW
+	bltu  a1, a3, 1b
+
+2:
+	.option pop
+	ret
+SYM_FUNC_END(__riscv_copy_vec_words_unaligned)
+
+/* void __riscv_copy_vec_bytes_unaligned(void *, const void *, size_t) */
+/* Performs a memcpy without aligning buffers, using only byte accesses. */
+/* Note: The size is truncated to a multiple of 8 */
+SYM_FUNC_START(__riscv_copy_vec_bytes_unaligned)
+	andi a4, a2, ~(8-1)
+	beqz a4, 2f
+	add  a3, a1, a4
+	.option push
+	.option arch, +zve32x
+1:
+	vsetivli t0, 8, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vse8.v v0, (a0)
+	addi a0, a0, 8
+	addi a1, a1, 8
+	bltu a1, a3, 1b
+
+2:
+	.option pop
+	ret
+SYM_FUNC_END(__riscv_copy_vec_bytes_unaligned)
-- 
2.45.2
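
For userspace consumers, a minimal sketch of how the reported speed could be
queried through the riscv_hwprobe syscall follows (see
Documentation/arch/riscv/hwprobe.rst). It is illustrative only, not part of
the patch, and assumes the RISCV_HWPROBE_KEY_VEC_MISALIGNED_PERF key name
introduced earlier in this series.

/*
 * Illustrative sketch (not part of this patch): ask the kernel whether
 * vector misaligned accesses are fast before picking a memcpy strategy.
 * Assumes RISCV_HWPROBE_KEY_VEC_MISALIGNED_PERF from earlier in the series.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/hwprobe.h>
#include <asm/unistd.h>

int main(void)
{
	struct riscv_hwprobe pair = {
		.key = RISCV_HWPROBE_KEY_VEC_MISALIGNED_PERF,
	};

	/* cpusetsize == 0 with cpus == NULL queries all online CPUs. */
	if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0)) {
		perror("riscv_hwprobe");
		return 1;
	}

	if (pair.value == RISCV_HWPROBE_VEC_MISALIGNED_FAST)
		printf("vector misaligned accesses are fast\n");
	else
		printf("vector misaligned accesses are not known to be fast\n");

	return 0;
}

Passing an empty CPU set asks for a value that holds across every online CPU,
matching how hwprobe_vec_misaligned() above folds the per-CPU results.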