* [PATCH v4 00/10] Coverage deduplication for KCOV
@ 2025-07-31 11:51 Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 01/10] x86: kcov: disable instrumentation of arch/x86/kernel/tsc.c Alexander Potapenko
` (9 more replies)
0 siblings, 10 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Aleksandr Nogikh,
Andrey Konovalov, Borislav Petkov, Dave Hansen, Dmitry Vyukov,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
As mentioned by Joey Jiao in [1], the current kcov implementation may
suffer from certain syscalls overflowing the userspace coverage buffer.
According to our measurements, among 24 syzkaller instances running
upstream Linux, 5 had a coverage overflow in at least 50% of executed
programs. The median percentage of programs with overflows across those 24
instances was 8.8%.
One way to mitigate this problem is to increase the size of the kcov buffer
in the userspace application using kcov. But syzkaller already uses 4 MB
for each of up to 32 threads to store the coverage, and increasing it further
would reduce the number of executors that fit on a single machine. Replaying
the same program with a bigger buffer whenever an overflow occurs would
likewise reduce the number of executions possible.
When executing a single system call, excessive coverage usually stems from
loops, which write the same PCs into the output buffer repeatedly. Although
collecting precise traces may give us some insights into e.g. the number of
loop iterations and the branches being taken, the fuzzing engine does not
take advantage of these signals, and recording only unique PCs should be
just as practical.
In [1] Joey Jiao suggested using a hash table to deduplicate the coverage
signal on the kernel side. While universally applicable to all types of data
collected by kcov, this approach adds another layer of complexity, because the
map needs to be grown dynamically. Another problem is potential hash
collisions, which can also lead to lost coverage. Hash maps are also
unavoidably sparse, which potentially requires more memory.
The approach proposed in this patch series is to assign a unique (and almost
sequential) ID to each of the coverage callbacks in the kernel. Then we carve
out a fixed-size bitmap from the userspace trace buffer, and on every callback
invocation we:
- obtain the callback_ID;
- if bitmap[callback_ID] is not set, append the PC to the trace buffer;
- set bitmap[callback_ID] to true.
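
In pseudo-C, a single callback invocation in this mode looks roughly as
follows (an illustrative sketch, not the exact implementation from this
series; assign_new_id() is a hypothetical placeholder for the lazy ID
allocation described below):

  id = READ_ONCE(*guard);
  if (!id)
          id = assign_new_id(guard);           /* lazy, lock-free */
  if (!test_and_set_bit(id, bitmap)) {         /* first hit of this callback */
          pos = trace[0] + 1;
          if (pos < trace_size) {
                  trace[0] = pos;              /* trace[0] holds the PC count */
                  trace[pos] = _RET_IP_;
          }
  }
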
LLVM's -fsanitize-coverage=trace-pc-guard [2] replaces every coverage callback
in the kernel with a call to
__sanitizer_cov_trace_pc_guard(&guard_variable), where guard_variable is a
4-byte global that is unique for the callsite.
This allows us to lazily allocate sequential numbers just for the callbacks
that have actually been executed, using a lock-free algorithm.
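
A minimal sketch of such an allocation scheme, assuming a single global atomic
counter (the actual implementation later in the series additionally keeps a
per-CPU stash for indices that lose the race):

  static atomic_t next_index = ATOMIC_INIT(0);

  static u32 assign_new_id(u32 *guard)
  {
          u32 index = atomic_inc_return(&next_index);
          /* Another task may have initialized this guard concurrently. */
          u32 old = cmpxchg(guard, 0, index);

          return old ? old : index;
  }
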
This patch series implements a new config, CONFIG_KCOV_UNIQUE, which
utilizes the mentioned LLVM flag for coverage instrumentation. In addition
to the existing coverage collection modes, it introduces
ioctl(KCOV_UNIQUE_ENABLE), which splits the existing kcov buffer into the
bitmap and the trace part for a particular fuzzing session, and collects
only unique coverage in the trace buffer.
To reset the coverage between runs, it is now necessary to set trace[0] to
0 AND clear the entire bitmap. This is still considered feasible, based on
the experimental results below.
Alternatively, users can call ioctl(KCOV_RESET_TRACE) to reset the coverage.
This makes it possible to map the coverage buffer read-only, so that it
is harder to corrupt.
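
For instance, with a bitmap of BITMAP_SIZE words carved out at the start of
the mmap()ed buffer, a userspace reset between runs amounts to the following
(a sketch mirroring the documentation added later in the series):

  /* cover is the mmap()ed kcov buffer; BITMAP_SIZE is the word count
   * passed to ioctl(fd, KCOV_UNIQUE_ENABLE, ...). */
  __atomic_store_n(&cover[BITMAP_SIZE], 0, __ATOMIC_RELAXED); /* trace[0] */
  memset(cover, 0, BITMAP_SIZE * sizeof(unsigned long));      /* bitmap   */

or, when the buffer is mapped read-only, to a single ioctl(KCOV_RESET_TRACE)
call.
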
The current design does not address the deduplication of KCOV_TRACE_CMP
comparisons; however, the number of kcov overflows during the hints
collection process is insignificant compared to the overflows of
KCOV_TRACE_PC.
In addition to the mentioned changes, this patch series adds a selftest in
tools/testing/selftests/kcov/ that exercises the various coverage collection
modes.
Experimental results.
We've conducted an experiment running syz-testbed [3] on 10 syzkaller
instances for 24 hours. Out of those 10 instances, 5 had the kcov_deduplicate
flag from [4] enabled, which makes use of the KCOV_UNIQUE_ENABLE ioctl,
reserving 4096 words (262144 bits) for the bitmap and leaving 520192 words
for the trace collection.
Below are the average stats from the runs.
kcov_deduplicate=false:
corpus: 52176
coverage: 302658
cover overflows: 225288
comps overflows: 491
exec total: 1417829
max signal: 318894
kcov_deduplicate=true:
corpus: 52581
coverage: 304344
cover overflows: 986
comps overflows: 626
exec total: 1484841
max signal: 322455
[1] https://lore.kernel.org/linux-arm-kernel/20250114-kcov-v1-5-004294b931a2@quicinc.com/T/
[2] https://clang.llvm.org/docs/SanitizerCoverage.html
[3] https://github.com/google/syzkaller/tree/master/tools/syz-testbed
[4] https://github.com/ramosian-glider/syzkaller/tree/kcov_dedup-new
v4:
- fix a compilation error detected by the kernel test robot <lkp@intel.com>
- add CONFIG_KCOV_UNIQUE=y as a prerequisite for kcov_test
- add Reviewed-by: tags
v3:
- drop "kcov: apply clang-format to kcov code"
- address reviewers' comments
- merge __sancov_guards into .bss
- proper testing of unique coverage in kcov_test
- fix a warning detected by the kernel test robot <lkp@intel.com>
- better comments
v2:
- assorted cleanups (enum kcov_mode, docs)
- address reviewers' comments
- drop R_X86_64_REX_GOTPCRELX support
- implement ioctl(KCOV_RESET_TRACE)
- add a userspace selftest
Alexander Potapenko (10):
x86: kcov: disable instrumentation of arch/x86/kernel/tsc.c
kcov: elaborate on using the shared buffer
kcov: factor out struct kcov_state
mm/kasan: define __asan_before_dynamic_init, __asan_after_dynamic_init
kcov: x86: introduce CONFIG_KCOV_UNIQUE
kcov: add trace and trace_size to struct kcov_state
kcov: add ioctl(KCOV_UNIQUE_ENABLE)
kcov: add ioctl(KCOV_RESET_TRACE)
kcov: selftests: add kcov_test
kcov: use enum kcov_mode in kcov_mode_enabled()
Documentation/dev-tools/kcov.rst | 124 +++++++
MAINTAINERS | 3 +
arch/x86/Kconfig | 1 +
arch/x86/kernel/Makefile | 2 +
arch/x86/kernel/vmlinux.lds.S | 1 +
include/asm-generic/vmlinux.lds.h | 13 +-
include/linux/kcov.h | 6 +-
include/linux/kcov_types.h | 37 +++
include/linux/sched.h | 13 +-
include/uapi/linux/kcov.h | 2 +
kernel/kcov.c | 368 ++++++++++++++-------
lib/Kconfig.debug | 26 ++
mm/kasan/generic.c | 24 ++
mm/kasan/kasan.h | 2 +
scripts/Makefile.kcov | 7 +
scripts/module.lds.S | 35 ++
tools/objtool/check.c | 3 +-
tools/testing/selftests/kcov/Makefile | 6 +
tools/testing/selftests/kcov/config | 2 +
tools/testing/selftests/kcov/kcov_test.c | 401 +++++++++++++++++++++++
20 files changed, 949 insertions(+), 127 deletions(-)
create mode 100644 include/linux/kcov_types.h
create mode 100644 tools/testing/selftests/kcov/Makefile
create mode 100644 tools/testing/selftests/kcov/config
create mode 100644 tools/testing/selftests/kcov/kcov_test.c
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 01/10] x86: kcov: disable instrumentation of arch/x86/kernel/tsc.c
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 02/10] kcov: elaborate on using the shared buffer Alexander Potapenko
` (8 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Aleksandr Nogikh,
Andrey Konovalov, Borislav Petkov, Dave Hansen, Dmitry Vyukov,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
sched_clock() appears to be called from interrupts, producing spurious
coverage, as reported by CONFIG_KCOV_SELFTEST:
RIP: 0010:__sanitizer_cov_trace_pc_guard+0x66/0xe0 kernel/kcov.c:288
...
fault_in_kernel_space+0x17/0x70 arch/x86/mm/fault.c:1119
handle_page_fault arch/x86/mm/fault.c:1477
exc_page_fault+0x56/0x110 arch/x86/mm/fault.c:1538
asm_exc_page_fault+0x26/0x30 ./arch/x86/include/asm/idtentry.h:623
RIP: 0010:__sanitizer_cov_trace_pc_guard+0x66/0xe0 kernel/kcov.c:288
...
sched_clock+0x12/0x70 arch/x86/kernel/tsc.c:284
__lock_pin_lock kernel/locking/lockdep.c:5628
lock_pin_lock+0xd7/0x180 kernel/locking/lockdep.c:5959
rq_pin_lock kernel/sched/sched.h:1761
rq_lock kernel/sched/sched.h:1838
__schedule+0x3a8/0x4b70 kernel/sched/core.c:6691
preempt_schedule_irq+0xbf/0x160 kernel/sched/core.c:7090
irqentry_exit+0x6f/0x90 kernel/entry/common.c:354
asm_sysvec_reschedule_ipi+0x1a/0x20 ./arch/x86/include/asm/idtentry.h:707
RIP: 0010:selftest+0x26/0x60 kernel/kcov.c:1223
...
kcov_init+0x81/0xa0 kernel/kcov.c:1252
do_one_initcall+0x2e1/0x910
do_initcall_level+0xff/0x160 init/main.c:1319
do_initcalls+0x4a/0xa0 init/main.c:1335
kernel_init_freeable+0x448/0x610 init/main.c:1567
kernel_init+0x24/0x230 init/main.c:1457
ret_from_fork+0x60/0x90 arch/x86/kernel/process.c:153
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Signed-off-by: Alexander Potapenko <glider@google.com>
---
Change-Id: Ica191d73bf5601b31e893d6e517b91be983e986a
---
arch/x86/kernel/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 0d2a6d953be91..ca134ce03eea9 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -43,6 +43,8 @@ KCOV_INSTRUMENT_dumpstack_$(BITS).o := n
KCOV_INSTRUMENT_unwind_orc.o := n
KCOV_INSTRUMENT_unwind_frame.o := n
KCOV_INSTRUMENT_unwind_guess.o := n
+# Avoid instrumenting code that produces spurious coverage in interrupts.
+KCOV_INSTRUMENT_tsc.o := n
CFLAGS_head32.o := -fno-stack-protector
CFLAGS_head64.o := -fno-stack-protector
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 02/10] kcov: elaborate on using the shared buffer
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 01/10] x86: kcov: disable instrumentation of arch/x86/kernel/tsc.c Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 03/10] kcov: factor out struct kcov_state Alexander Potapenko
` (7 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Dmitry Vyukov,
Aleksandr Nogikh, Andrey Konovalov, Borislav Petkov, Dave Hansen,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
Add a paragraph about the shared buffer usage to kcov.rst.
Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
---
v3:
- add Reviewed-by: Dmitry Vyukov
Change-Id: Ia47ef7c3fcc74789fe57a6e1d93e29a42dbc0a97
---
Documentation/dev-tools/kcov.rst | 55 ++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/Documentation/dev-tools/kcov.rst b/Documentation/dev-tools/kcov.rst
index 6611434e2dd24..abf3ad2e784e8 100644
--- a/Documentation/dev-tools/kcov.rst
+++ b/Documentation/dev-tools/kcov.rst
@@ -137,6 +137,61 @@ mmaps coverage buffer, and then forks child processes in a loop. The child
processes only need to enable coverage (it gets disabled automatically when
a thread exits).
+Shared buffer for coverage collection
+-------------------------------------
+KCOV employs a shared memory buffer as a central mechanism for efficient and
+direct transfer of code coverage information between the kernel and userspace
+applications.
+
+Calling ``ioctl(fd, KCOV_INIT_TRACE, size)`` initializes coverage collection for
+the current thread associated with the file descriptor ``fd``. The buffer
+allocated will hold ``size`` unsigned long values, as interpreted by the kernel.
+Notably, even in a 32-bit userspace program on a 64-bit kernel, each entry will
+occupy 64 bits.
+
+Following initialization, the actual shared memory buffer is created using::
+
+ mmap(NULL, size * sizeof(unsigned long), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)
+
+The size of this memory mapping, calculated as ``size * sizeof(unsigned long)``,
+must be a multiple of ``PAGE_SIZE``.
+
+This buffer is then shared between the kernel and the userspace. The first
+element of the buffer contains the number of PCs stored in it.
+Both the userspace and the kernel may write to the shared buffer, so to avoid
+race conditions each userspace thread should only update its own buffer.
+
+Normally the shared buffer is used as follows::
+
+ Userspace Kernel
+ -----------------------------------------+-------------------------------------------
+ ioctl(fd, KCOV_INIT_TRACE, size) |
+ | Initialize coverage for current thread
+ mmap(..., MAP_SHARED, fd, 0) |
+ | Allocate the buffer, initialize it
+ | with zeroes
+ ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC) |
+ | Enable PC collection for current thread
+ | starting at buffer[1] (KCOV_ENABLE will
+ | already write some coverage)
+ Atomically write 0 to buffer[0] to |
+ reset the coverage |
+ |
+ Execute some syscall(s) |
+ | Write new coverage starting at
+ | buffer[1]
+ Atomically read buffer[0] to get the |
+ total coverage size at this point in |
+ time |
+ |
+ ioctl(fd, KCOV_DISABLE, 0) |
+ | Write some more coverage for ioctl(),
+ | then disable PC collection for current
+ | thread
+ Safely read and process the coverage |
+ up to the buffer[0] value saved above |
+
+
Comparison operands collection
------------------------------
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 03/10] kcov: factor out struct kcov_state
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 01/10] x86: kcov: disable instrumentation of arch/x86/kernel/tsc.c Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 02/10] kcov: elaborate on using the shared buffer Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 04/10] mm/kasan: define __asan_before_dynamic_init, __asan_after_dynamic_init Alexander Potapenko
` (6 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Dmitry Vyukov,
Aleksandr Nogikh, Andrey Konovalov, Borislav Petkov, Dave Hansen,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
Group several kcov-related fields (area, size, sequence) that are
stored in various structures into `struct kcov_state`, so that
these fields can be easily passed around and manipulated.
Note that now the spinlock in struct kcov applies to every member
of struct kcov_state, including the sequence number.
This prepares us for the upcoming change that will introduce more
kcov state.
Also update the MAINTAINERS entry: add include/linux/kcov_types.h,
add myself as kcov reviewer.
Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
---
v4:
- add Reviewed-by: Dmitry Vyukov
v3:
- fix comments by Dmitry Vyukov:
- adjust a comment in sched.h
- fix incorrect parameters passed to kcov_start()
v2:
- add myself to kcov MAINTAINERS
- rename kcov-state.h to kcov_types.h
- update the description
- do not move mode into struct kcov_state
- use '{ }' instead of '{ 0 }'
Change-Id: If225682ea2f6e91245381b3270de16e7ea40df39
---
MAINTAINERS | 2 +
include/linux/kcov.h | 2 +-
include/linux/kcov_types.h | 22 ++++++++
include/linux/sched.h | 13 +----
kernel/kcov.c | 112 ++++++++++++++++---------------------
5 files changed, 77 insertions(+), 74 deletions(-)
create mode 100644 include/linux/kcov_types.h
diff --git a/MAINTAINERS b/MAINTAINERS
index c0b444e5fd5ad..6906eb9d88dae 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13008,11 +13008,13 @@ F: include/linux/kcore.h
KCOV
R: Dmitry Vyukov <dvyukov@google.com>
R: Andrey Konovalov <andreyknvl@gmail.com>
+R: Alexander Potapenko <glider@google.com>
L: kasan-dev@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kcov.rst
F: include/linux/kcov.h
+F: include/linux/kcov_types.h
F: include/uapi/linux/kcov.h
F: kernel/kcov.c
F: scripts/Makefile.kcov
diff --git a/include/linux/kcov.h b/include/linux/kcov.h
index 75a2fb8b16c32..2b3655c0f2278 100644
--- a/include/linux/kcov.h
+++ b/include/linux/kcov.h
@@ -2,7 +2,7 @@
#ifndef _LINUX_KCOV_H
#define _LINUX_KCOV_H
-#include <linux/sched.h>
+#include <linux/kcov_types.h>
#include <uapi/linux/kcov.h>
struct task_struct;
diff --git a/include/linux/kcov_types.h b/include/linux/kcov_types.h
new file mode 100644
index 0000000000000..53b25b6f0addd
--- /dev/null
+++ b/include/linux/kcov_types.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_KCOV_STATE_H
+#define _LINUX_KCOV_STATE_H
+
+#ifdef CONFIG_KCOV
+/* See kernel/kcov.c for more details. */
+struct kcov_state {
+ /* Size of the area (in long's). */
+ unsigned int size;
+
+ /* Buffer for coverage collection, shared with the userspace. */
+ void *area;
+
+ /*
+ * KCOV sequence number: incremented each time kcov is reenabled, used
+ * by kcov_remote_stop(), see the comment there.
+ */
+ int sequence;
+};
+#endif /* CONFIG_KCOV */
+
+#endif /* _LINUX_KCOV_STATE_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index aa9c5be7a6325..7901fece5aba3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -42,6 +42,7 @@
#include <linux/restart_block.h>
#include <uapi/linux/rseq.h>
#include <linux/seqlock_types.h>
+#include <linux/kcov_types.h>
#include <linux/kcsan.h>
#include <linux/rv.h>
#include <linux/uidgid_types.h>
@@ -1516,16 +1517,11 @@ struct task_struct {
#endif /* CONFIG_TRACING */
#ifdef CONFIG_KCOV
- /* See kernel/kcov.c for more details. */
-
/* Coverage collection mode enabled for this task (0 if disabled): */
unsigned int kcov_mode;
- /* Size of the kcov_area: */
- unsigned int kcov_size;
-
- /* Buffer for coverage collection: */
- void *kcov_area;
+ /* KCOV buffer state for this task. */
+ struct kcov_state kcov_state;
/* KCOV descriptor wired with this task or NULL: */
struct kcov *kcov;
@@ -1533,9 +1529,6 @@ struct task_struct {
/* KCOV common handle for remote coverage collection: */
u64 kcov_handle;
- /* KCOV sequence number: */
- int kcov_sequence;
-
/* Collect coverage from softirq context: */
unsigned int kcov_softirq;
#endif
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 187ba1b80bda1..5170f367c8a1b 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -23,6 +23,7 @@
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/kcov.h>
+#include <linux/kcov_types.h>
#include <linux/refcount.h>
#include <linux/log2.h>
#include <asm/setup.h>
@@ -53,24 +54,17 @@ struct kcov {
* - each code section for remote coverage collection
*/
refcount_t refcount;
- /* The lock protects mode, size, area and t. */
+ /* The lock protects mode, state and t. */
spinlock_t lock;
enum kcov_mode mode;
- /* Size of arena (in long's). */
- unsigned int size;
- /* Coverage buffer shared with user space. */
- void *area;
+ struct kcov_state state;
+
/* Task for which we collect coverage, or NULL. */
struct task_struct *t;
/* Collecting coverage from remote (background) threads. */
bool remote;
/* Size of remote area (in long's). */
unsigned int remote_size;
- /*
- * Sequence is incremented each time kcov is reenabled, used by
- * kcov_remote_stop(), see the comment there.
- */
- int sequence;
};
struct kcov_remote_area {
@@ -92,11 +86,9 @@ struct kcov_percpu_data {
void *irq_area;
local_lock_t lock;
- unsigned int saved_mode;
- unsigned int saved_size;
- void *saved_area;
+ enum kcov_mode saved_mode;
struct kcov *saved_kcov;
- int saved_sequence;
+ struct kcov_state saved_state;
};
static DEFINE_PER_CPU(struct kcov_percpu_data, kcov_percpu_data) = {
@@ -217,10 +209,10 @@ void notrace __sanitizer_cov_trace_pc(void)
if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
return;
- area = t->kcov_area;
+ area = t->kcov_state.area;
/* The first 64-bit word is the number of subsequent PCs. */
pos = READ_ONCE(area[0]) + 1;
- if (likely(pos < t->kcov_size)) {
+ if (likely(pos < t->kcov_state.size)) {
/* Previously we write pc before updating pos. However, some
* early interrupt code could bypass check_kcov_mode() check
* and invoke __sanitizer_cov_trace_pc(). If such interrupt is
@@ -250,10 +242,10 @@ static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
/*
* We write all comparison arguments and types as u64.
- * The buffer was allocated for t->kcov_size unsigned longs.
+ * The buffer was allocated for t->kcov_state.size unsigned longs.
*/
- area = (u64 *)t->kcov_area;
- max_pos = t->kcov_size * sizeof(unsigned long);
+ area = (u64 *)t->kcov_state.area;
+ max_pos = t->kcov_state.size * sizeof(unsigned long);
count = READ_ONCE(area[0]);
@@ -354,15 +346,13 @@ EXPORT_SYMBOL(__sanitizer_cov_trace_switch);
#endif /* ifdef CONFIG_KCOV_ENABLE_COMPARISONS */
static void kcov_start(struct task_struct *t, struct kcov *kcov,
- unsigned int size, void *area, enum kcov_mode mode,
- int sequence)
+ enum kcov_mode mode, struct kcov_state *state)
{
- kcov_debug("t = %px, size = %u, area = %px\n", t, size, area);
+ kcov_debug("t = %px, size = %u, area = %px\n", t, state->size,
+ state->area);
t->kcov = kcov;
/* Cache in task struct for performance. */
- t->kcov_size = size;
- t->kcov_area = area;
- t->kcov_sequence = sequence;
+ t->kcov_state = *state;
/* See comment in check_kcov_mode(). */
barrier();
WRITE_ONCE(t->kcov_mode, mode);
@@ -373,14 +363,14 @@ static void kcov_stop(struct task_struct *t)
WRITE_ONCE(t->kcov_mode, KCOV_MODE_DISABLED);
barrier();
t->kcov = NULL;
- t->kcov_size = 0;
- t->kcov_area = NULL;
+ t->kcov_state.size = 0;
+ t->kcov_state.area = NULL;
}
static void kcov_task_reset(struct task_struct *t)
{
kcov_stop(t);
- t->kcov_sequence = 0;
+ t->kcov_state.sequence = 0;
t->kcov_handle = 0;
}
@@ -396,7 +386,7 @@ static void kcov_reset(struct kcov *kcov)
kcov->mode = KCOV_MODE_INIT;
kcov->remote = false;
kcov->remote_size = 0;
- kcov->sequence++;
+ kcov->state.sequence++;
}
static void kcov_remote_reset(struct kcov *kcov)
@@ -436,7 +426,7 @@ static void kcov_put(struct kcov *kcov)
{
if (refcount_dec_and_test(&kcov->refcount)) {
kcov_remote_reset(kcov);
- vfree(kcov->area);
+ vfree(kcov->state.area);
kfree(kcov);
}
}
@@ -493,8 +483,8 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
unsigned long flags;
spin_lock_irqsave(&kcov->lock, flags);
- size = kcov->size * sizeof(unsigned long);
- if (kcov->area == NULL || vma->vm_pgoff != 0 ||
+ size = kcov->state.size * sizeof(unsigned long);
+ if (kcov->state.area == NULL || vma->vm_pgoff != 0 ||
vma->vm_end - vma->vm_start != size) {
res = -EINVAL;
goto exit;
@@ -502,7 +492,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
spin_unlock_irqrestore(&kcov->lock, flags);
vm_flags_set(vma, VM_DONTEXPAND);
for (off = 0; off < size; off += PAGE_SIZE) {
- page = vmalloc_to_page(kcov->area + off);
+ page = vmalloc_to_page(kcov->state.area + off);
res = vm_insert_page(vma, vma->vm_start + off, page);
if (res) {
pr_warn_once("kcov: vm_insert_page() failed\n");
@@ -523,7 +513,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
if (!kcov)
return -ENOMEM;
kcov->mode = KCOV_MODE_DISABLED;
- kcov->sequence = 1;
+ kcov->state.sequence = 1;
refcount_set(&kcov->refcount, 1);
spin_lock_init(&kcov->lock);
filep->private_data = kcov;
@@ -558,10 +548,10 @@ static int kcov_get_mode(unsigned long arg)
static void kcov_fault_in_area(struct kcov *kcov)
{
unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
- unsigned long *area = kcov->area;
+ unsigned long *area = kcov->state.area;
unsigned long offset;
- for (offset = 0; offset < kcov->size; offset += stride)
+ for (offset = 0; offset < kcov->state.size; offset += stride)
READ_ONCE(area[offset]);
}
@@ -600,7 +590,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
* at task exit or voluntary by KCOV_DISABLE. After that it can
* be enabled for another task.
*/
- if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
+ if (kcov->mode != KCOV_MODE_INIT || !kcov->state.area)
return -EINVAL;
t = current;
if (kcov->t != NULL || t->kcov != NULL)
@@ -610,8 +600,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
return mode;
kcov_fault_in_area(kcov);
kcov->mode = mode;
- kcov_start(t, kcov, kcov->size, kcov->area, kcov->mode,
- kcov->sequence);
+ kcov_start(t, kcov, mode, &kcov->state);
kcov->t = t;
/* Put either in kcov_task_exit() or in KCOV_DISABLE. */
kcov_get(kcov);
@@ -628,7 +617,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
kcov_put(kcov);
return 0;
case KCOV_REMOTE_ENABLE:
- if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
+ if (kcov->mode != KCOV_MODE_INIT || !kcov->state.area)
return -EINVAL;
t = current;
if (kcov->t != NULL || t->kcov != NULL)
@@ -722,8 +711,8 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
vfree(area);
return -EBUSY;
}
- kcov->area = area;
- kcov->size = size;
+ kcov->state.area = area;
+ kcov->state.size = size;
kcov->mode = KCOV_MODE_INIT;
spin_unlock_irqrestore(&kcov->lock, flags);
return 0;
@@ -821,10 +810,8 @@ static void kcov_remote_softirq_start(struct task_struct *t)
mode = READ_ONCE(t->kcov_mode);
barrier();
if (kcov_mode_enabled(mode)) {
+ data->saved_state = t->kcov_state;
data->saved_mode = mode;
- data->saved_size = t->kcov_size;
- data->saved_area = t->kcov_area;
- data->saved_sequence = t->kcov_sequence;
data->saved_kcov = t->kcov;
kcov_stop(t);
}
@@ -835,13 +822,9 @@ static void kcov_remote_softirq_stop(struct task_struct *t)
struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
if (data->saved_kcov) {
- kcov_start(t, data->saved_kcov, data->saved_size,
- data->saved_area, data->saved_mode,
- data->saved_sequence);
- data->saved_mode = 0;
- data->saved_size = 0;
- data->saved_area = NULL;
- data->saved_sequence = 0;
+ kcov_start(t, data->saved_kcov, data->saved_mode,
+ &data->saved_state);
+ data->saved_state = (struct kcov_state){};
data->saved_kcov = NULL;
}
}
@@ -850,12 +833,12 @@ void kcov_remote_start(u64 handle)
{
struct task_struct *t = current;
struct kcov_remote *remote;
+ struct kcov_state state;
+ enum kcov_mode mode;
+ unsigned long flags;
+ unsigned int size;
struct kcov *kcov;
- unsigned int mode;
void *area;
- unsigned int size;
- int sequence;
- unsigned long flags;
if (WARN_ON(!kcov_check_handle(handle, true, true, true)))
return;
@@ -900,7 +883,7 @@ void kcov_remote_start(u64 handle)
* KCOV_DISABLE / kcov_remote_reset().
*/
mode = kcov->mode;
- sequence = kcov->sequence;
+ state.sequence = kcov->state.sequence;
if (in_task()) {
size = kcov->remote_size;
area = kcov_remote_area_get(size);
@@ -923,12 +906,14 @@ void kcov_remote_start(u64 handle)
/* Reset coverage size. */
*(u64 *)area = 0;
+ state.area = area;
+ state.size = size;
if (in_serving_softirq()) {
kcov_remote_softirq_start(t);
t->kcov_softirq = 1;
}
- kcov_start(t, kcov, size, area, mode, sequence);
+ kcov_start(t, kcov, mode, &state);
local_unlock_irqrestore(&kcov_percpu_data.lock, flags);
@@ -1027,9 +1012,9 @@ void kcov_remote_stop(void)
}
kcov = t->kcov;
- area = t->kcov_area;
- size = t->kcov_size;
- sequence = t->kcov_sequence;
+ area = t->kcov_state.area;
+ size = t->kcov_state.size;
+ sequence = t->kcov_state.sequence;
kcov_stop(t);
if (in_serving_softirq()) {
@@ -1042,8 +1027,9 @@ void kcov_remote_stop(void)
* KCOV_DISABLE could have been called between kcov_remote_start()
* and kcov_remote_stop(), hence the sequence check.
*/
- if (sequence == kcov->sequence && kcov->remote)
- kcov_move_area(kcov->mode, kcov->area, kcov->size, area);
+ if (sequence == kcov->state.sequence && kcov->remote)
+ kcov_move_area(kcov->mode, kcov->state.area, kcov->state.size,
+ area);
spin_unlock(&kcov->lock);
if (in_task()) {
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 04/10] mm/kasan: define __asan_before_dynamic_init, __asan_after_dynamic_init
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
` (2 preceding siblings ...)
2025-07-31 11:51 ` [PATCH v4 03/10] kcov: factor out struct kcov_state Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 05/10] kcov: x86: introduce CONFIG_KCOV_UNIQUE Alexander Potapenko
` (5 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Dmitry Vyukov,
Aleksandr Nogikh, Andrey Konovalov, Borislav Petkov, Dave Hansen,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
Calls to __asan_before_dynamic_init() and __asan_after_dynamic_init()
are inserted by Clang when building with coverage guards.
These functions can be used to detect initialization order fiasco bugs
in the userspace, but it is fine for them to be no-ops in the kernel.
Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
---
v4:
- Fix a compilation error reported by the kernel test robot <lkp@intel.com>
v3:
- add Reviewed-by: Dmitry Vyukov
v2:
- Address comments by Dmitry Vyukov:
- rename CONFIG_KCOV_ENABLE_GUARDS to CONFIG_KCOV_UNIQUE
- Move this patch before the one introducing CONFIG_KCOV_UNIQUE,
per Marco Elver's request.
Change-Id: I7f8eb690a3d96f7d122205e8f1cba8039f6a68eb
fixup asan_before
Change-Id: If653ba4f160414cafe65eee530b6b67e5b5b547c
---
mm/kasan/generic.c | 24 ++++++++++++++++++++++++
mm/kasan/kasan.h | 2 ++
2 files changed, 26 insertions(+)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index d54e89f8c3e76..b43ac17b7c926 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -238,6 +238,30 @@ void __asan_unregister_globals(void *ptr, ssize_t size)
}
EXPORT_SYMBOL(__asan_unregister_globals);
+#if defined(CONFIG_KCOV_UNIQUE)
+/*
+ * __asan_before_dynamic_init() and __asan_after_dynamic_init() are inserted
+ * when the user requests building with coverage guards. In the userspace, these
+ * two functions can be used to detect initialization order fiasco bugs, but in
+ * the kernel they can be no-ops.
+ *
+ * There is an inconsistency between how Clang and GCC emit calls to this
+ * function, with Clang expecting the parameter to be i64, whereas GCC wants it
+ * to be const void *.
+ * We pick the latter option, because Clang does not care, and GCC prints a
+ * warning with -Wbuiltin-declaration-mismatch.
+ */
+void __asan_before_dynamic_init(const void *module_name)
+{
+}
+EXPORT_SYMBOL(__asan_before_dynamic_init);
+
+void __asan_after_dynamic_init(void)
+{
+}
+EXPORT_SYMBOL(__asan_after_dynamic_init);
+#endif
+
#define DEFINE_ASAN_LOAD_STORE(size) \
void __asan_load##size(void *addr) \
{ \
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e649..d23fcac9e0c12 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -582,6 +582,8 @@ void kasan_restore_multi_shot(bool enabled);
void __asan_register_globals(void *globals, ssize_t size);
void __asan_unregister_globals(void *globals, ssize_t size);
+void __asan_before_dynamic_init(const void *module_name);
+void __asan_after_dynamic_init(void);
void __asan_handle_no_return(void);
void __asan_alloca_poison(void *, ssize_t size);
void __asan_allocas_unpoison(void *stack_top, ssize_t stack_bottom);
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 05/10] kcov: x86: introduce CONFIG_KCOV_UNIQUE
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
` (3 preceding siblings ...)
2025-07-31 11:51 ` [PATCH v4 04/10] mm/kasan: define __asan_before_dynamic_init, __asan_after_dynamic_init Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-08-26 8:14 ` Joey Jiao
2025-07-31 11:51 ` [PATCH v4 06/10] kcov: add trace and trace_size to struct kcov_state Alexander Potapenko
` (4 subsequent siblings)
9 siblings, 1 reply; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, x86, Dmitry Vyukov,
Aleksandr Nogikh, Andrey Konovalov, Borislav Petkov, Dave Hansen,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
The new config switches coverage instrumentation to using
__sanitizer_cov_trace_pc_guard(u32 *guard) instead of
__sanitizer_cov_trace_pc(void).
This relies on Clang's -fsanitize-coverage=trace-pc-guard flag [1].
Each callback receives a unique 32-bit guard variable residing in .bss.
Those guards can be used by kcov to deduplicate the coverage on the fly.
As a first step, we make the new instrumentation mode 1:1 compatible
with the old one.
[1] https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-pcs-with-guards
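
Conceptually, the instrumented code looks roughly like this (an illustrative
sketch of what the compiler emits; the guard name is hypothetical, the real
guards are generated by Clang and collected into the __sancov_guards section):

  static u32 __sancov_gen_guard __section("__sancov_guards");

  void instrumented_function(void)
  {
          __sanitizer_cov_trace_pc_guard(&__sancov_gen_guard);
          /* ... original basic block ... */
  }
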
Cc: x86@kernel.org
Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
---
v4:
- add Reviewed-by: Dmitry Vyukov
v3:
- per Dmitry Vyukov's request, add better comments in
scripts/module.lds.S and lib/Kconfig.debug
- add -sanitizer-coverage-drop-ctors to scripts/Makefile.kcov
to drop the unwanted constructors emitting unsupported relocations
- merge the __sancov_guards section into .bss
v2:
- Address comments by Dmitry Vyukov
- rename CONFIG_KCOV_ENABLE_GUARDS to CONFIG_KCOV_UNIQUE
- update commit description and config description
- Address comments by Marco Elver
- rename sanitizer_cov_write_subsequent() to kcov_append_to_buffer()
- make config depend on X86_64 (via ARCH_HAS_KCOV_UNIQUE)
- swap #ifdef branches
- tweak config description
- remove redundant check for CONFIG_CC_HAS_SANCOV_TRACE_PC_GUARD
Change-Id: Iacb1e71fd061a82c2acadf2347bba4863b9aec39
---
arch/x86/Kconfig | 1 +
arch/x86/kernel/vmlinux.lds.S | 1 +
include/asm-generic/vmlinux.lds.h | 13 ++++++-
include/linux/kcov.h | 2 +
kernel/kcov.c | 61 +++++++++++++++++++++----------
lib/Kconfig.debug | 26 +++++++++++++
scripts/Makefile.kcov | 7 ++++
scripts/module.lds.S | 35 ++++++++++++++++++
tools/objtool/check.c | 1 +
9 files changed, 126 insertions(+), 21 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8bed9030ad473..0533070d24fe7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -94,6 +94,7 @@ config X86
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_KCOV if X86_64
+ select ARCH_HAS_KCOV_UNIQUE if X86_64
select ARCH_HAS_KERNEL_FPU_SUPPORT
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_MEMBARRIER_SYNC_CORE
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 4fa0be732af10..52fe6539b9c91 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -372,6 +372,7 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
*(BSS_MAIN)
BSS_DECRYPTED
+ BSS_SANCOV_GUARDS
. = ALIGN(PAGE_SIZE);
__bss_stop = .;
}
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index fa5f19b8d53a0..ee78328eecade 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -102,7 +102,8 @@
* sections to be brought in with rodata.
*/
#if defined(CONFIG_LD_DEAD_CODE_DATA_ELIMINATION) || defined(CONFIG_LTO_CLANG) || \
-defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
+ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG) || \
+ defined(CONFIG_KCOV_UNIQUE)
#define TEXT_MAIN .text .text.[0-9a-zA-Z_]*
#else
#define TEXT_MAIN .text
@@ -121,6 +122,16 @@ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
#define SBSS_MAIN .sbss
#endif
+#if defined(CONFIG_KCOV_UNIQUE)
+/* BSS_SANCOV_GUARDS must be part of the .bss section so that it is zero-initialized. */
+#define BSS_SANCOV_GUARDS \
+ __start___sancov_guards = .; \
+ *(__sancov_guards); \
+ __stop___sancov_guards = .;
+#else
+#define BSS_SANCOV_GUARDS
+#endif
+
/*
* GCC 4.5 and later have a 32 bytes section alignment for structures.
* Except GCC 4.9, that feels the need to align on 64 bytes.
diff --git a/include/linux/kcov.h b/include/linux/kcov.h
index 2b3655c0f2278..2acccfa5ae9af 100644
--- a/include/linux/kcov.h
+++ b/include/linux/kcov.h
@@ -107,6 +107,8 @@ typedef unsigned long long kcov_u64;
#endif
void __sanitizer_cov_trace_pc(void);
+void __sanitizer_cov_trace_pc_guard(u32 *guard);
+void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop);
void __sanitizer_cov_trace_cmp1(u8 arg1, u8 arg2);
void __sanitizer_cov_trace_cmp2(u16 arg1, u16 arg2);
void __sanitizer_cov_trace_cmp4(u32 arg1, u32 arg2);
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 5170f367c8a1b..8154ac1c1622e 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -194,27 +194,15 @@ static notrace unsigned long canonicalize_ip(unsigned long ip)
return ip;
}
-/*
- * Entry point from instrumented code.
- * This is called once per basic-block/edge.
- */
-void notrace __sanitizer_cov_trace_pc(void)
+static notrace void kcov_append_to_buffer(unsigned long *area, int size,
+ unsigned long ip)
{
- struct task_struct *t;
- unsigned long *area;
- unsigned long ip = canonicalize_ip(_RET_IP_);
- unsigned long pos;
-
- t = current;
- if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
- return;
-
- area = t->kcov_state.area;
/* The first 64-bit word is the number of subsequent PCs. */
- pos = READ_ONCE(area[0]) + 1;
- if (likely(pos < t->kcov_state.size)) {
- /* Previously we write pc before updating pos. However, some
- * early interrupt code could bypass check_kcov_mode() check
+ unsigned long pos = READ_ONCE(area[0]) + 1;
+
+ if (likely(pos < size)) {
+ /*
+ * Some early interrupt code could bypass check_kcov_mode() check
* and invoke __sanitizer_cov_trace_pc(). If such interrupt is
* raised between writing pc and updating pos, the pc could be
* overitten by the recursive __sanitizer_cov_trace_pc().
@@ -225,7 +213,40 @@ void notrace __sanitizer_cov_trace_pc(void)
area[pos] = ip;
}
}
+
+/*
+ * Entry point from instrumented code.
+ * This is called once per basic-block/edge.
+ */
+#ifdef CONFIG_KCOV_UNIQUE
+void notrace __sanitizer_cov_trace_pc_guard(u32 *guard)
+{
+ if (!check_kcov_mode(KCOV_MODE_TRACE_PC, current))
+ return;
+
+ kcov_append_to_buffer(current->kcov_state.area,
+ current->kcov_state.size,
+ canonicalize_ip(_RET_IP_));
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_pc_guard);
+
+void notrace __sanitizer_cov_trace_pc_guard_init(uint32_t *start,
+ uint32_t *stop)
+{
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_pc_guard_init);
+#else /* !CONFIG_KCOV_UNIQUE */
+void notrace __sanitizer_cov_trace_pc(void)
+{
+ if (!check_kcov_mode(KCOV_MODE_TRACE_PC, current))
+ return;
+
+ kcov_append_to_buffer(current->kcov_state.area,
+ current->kcov_state.size,
+ canonicalize_ip(_RET_IP_));
+}
EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
+#endif
#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
@@ -253,7 +274,7 @@ static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
start_index = 1 + count * KCOV_WORDS_PER_CMP;
end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
if (likely(end_pos <= max_pos)) {
- /* See comment in __sanitizer_cov_trace_pc(). */
+ /* See comment in kcov_append_to_buffer(). */
WRITE_ONCE(area[0], count + 1);
barrier();
area[start_index] = type;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ebe33181b6e6e..a7441f89465f3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2153,6 +2153,12 @@ config ARCH_HAS_KCOV
build and run with CONFIG_KCOV. This typically requires
disabling instrumentation for some early boot code.
+config CC_HAS_SANCOV_TRACE_PC
+ def_bool $(cc-option,-fsanitize-coverage=trace-pc)
+
+config CC_HAS_SANCOV_TRACE_PC_GUARD
+ def_bool $(cc-option,-fsanitize-coverage=trace-pc-guard)
+
config KCOV
bool "Code coverage for fuzzing"
depends on ARCH_HAS_KCOV
@@ -2166,6 +2172,26 @@ config KCOV
For more details, see Documentation/dev-tools/kcov.rst.
+config ARCH_HAS_KCOV_UNIQUE
+ bool
+ help
+ An architecture should select this when it can successfully
+ build and run with CONFIG_KCOV_UNIQUE.
+
+config KCOV_UNIQUE
+ depends on KCOV
+ depends on CC_HAS_SANCOV_TRACE_PC_GUARD && ARCH_HAS_KCOV_UNIQUE
+ bool "Enable unique program counter collection mode for KCOV"
+ help
+ This option enables KCOV's unique program counter (PC) collection mode,
+ which deduplicates PCs on the fly when the KCOV_UNIQUE_ENABLE ioctl is
+ used.
+
+ This significantly reduces the memory footprint for coverage data
+ collection compared to trace mode, as it prevents the kernel from
+ storing the same PC multiple times.
+ Enabling this mode incurs a slight increase in kernel binary size.
+
config KCOV_ENABLE_COMPARISONS
bool "Enable comparison operands collection by KCOV"
depends on KCOV
diff --git a/scripts/Makefile.kcov b/scripts/Makefile.kcov
index 78305a84ba9d2..c3ad5504f5600 100644
--- a/scripts/Makefile.kcov
+++ b/scripts/Makefile.kcov
@@ -1,5 +1,12 @@
# SPDX-License-Identifier: GPL-2.0-only
+ifeq ($(CONFIG_KCOV_UNIQUE),y)
+kcov-flags-y += -fsanitize-coverage=trace-pc-guard
+# Drop per-file constructors that -fsanitize-coverage=trace-pc-guard inserts by default.
+# Kernel does not need them, and they may produce unknown relocations.
+kcov-flags-y += -mllvm -sanitizer-coverage-drop-ctors
+else
kcov-flags-y += -fsanitize-coverage=trace-pc
+endif
kcov-flags-$(CONFIG_KCOV_ENABLE_COMPARISONS) += -fsanitize-coverage=trace-cmp
kcov-rflags-y += -Cpasses=sancov-module
diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index 450f1088d5fd3..17f36d5112c5d 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -47,6 +47,7 @@ SECTIONS {
.bss : {
*(.bss .bss.[0-9a-zA-Z_]*)
*(.bss..L*)
+ *(__sancov_guards)
}
.data : {
@@ -64,6 +65,40 @@ SECTIONS {
MOD_CODETAG_SECTIONS()
}
#endif
+
+#ifdef CONFIG_KCOV_UNIQUE
+ /*
+ * CONFIG_KCOV_UNIQUE creates COMDAT groups for instrumented functions,
+ * which has the following consequences in the presence of
+ * -ffunction-sections:
+ * - Separate .init.text and .exit.text sections in the modules are not
+ * merged together, which results in errors trying to create
+ * duplicate entries in /sys/module/MODNAME/sections/ at module load
+ * time.
+ * - Each function is placed in a separate .text.funcname section, so
+ * there is no .text section anymore. Collecting them together here
+ * has mostly aesthetic purpose, although some tools may be expecting
+ * it to be present.
+ */
+ .text : {
+ *(.text .text.[0-9a-zA-Z_]*)
+ *(.text..L*)
+ }
+ .init.text : {
+ *(.init.text .init.text.[0-9a-zA-Z_]*)
+ *(.init.text..L*)
+ }
+ .exit.text : {
+ *(.exit.text .exit.text.[0-9a-zA-Z_]*)
+ *(.exit.text..L*)
+ }
+ .bss : {
+ *(.bss .bss.[0-9a-zA-Z_]*)
+ *(.bss..L*)
+ *(__sancov_guards)
+ }
+#endif
+
MOD_SEPARATE_CODETAG_SECTIONS()
}
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 67d76f3a1dce5..60eb5faa27d28 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1156,6 +1156,7 @@ static const char *uaccess_safe_builtin[] = {
"write_comp_data",
"check_kcov_mode",
"__sanitizer_cov_trace_pc",
+ "__sanitizer_cov_trace_pc_guard",
"__sanitizer_cov_trace_const_cmp1",
"__sanitizer_cov_trace_const_cmp2",
"__sanitizer_cov_trace_const_cmp4",
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 06/10] kcov: add trace and trace_size to struct kcov_state
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
` (4 preceding siblings ...)
2025-07-31 11:51 ` [PATCH v4 05/10] kcov: x86: introduce CONFIG_KCOV_UNIQUE Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 07/10] kcov: add ioctl(KCOV_UNIQUE_ENABLE) Alexander Potapenko
` (3 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Dmitry Vyukov,
Aleksandr Nogikh, Andrey Konovalov, Borislav Petkov, Dave Hansen,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
Keep kcov_state.area as the pointer to the memory buffer used by
kcov and shared with the userspace. Store the pointer to the trace
(the part of the buffer holding sequential events) separately, as we will
be splitting that buffer into multiple parts.
No functional changes so far.
Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
---
v4:
- add Reviewed-by: Dmitry Vyukov
v3:
- Fix a warning detected by the kernel test robot <lkp@intel.com>
- Address comments by Dmitry Vyukov:
- s/kcov/KCOV/
- fix struct initialization style
v2:
- Address comments by Dmitry Vyukov:
- tweak commit description
- Address comments by Marco Elver:
- rename sanitizer_cov_write_subsequent() to kcov_append_to_buffer()
- Update code to match the new description of struct kcov_state
Change-Id: I50b5589ef0e0b6726aa0579334093c648f76790a
---
include/linux/kcov_types.h | 9 ++++++-
kernel/kcov.c | 48 +++++++++++++++++++++-----------------
2 files changed, 35 insertions(+), 22 deletions(-)
diff --git a/include/linux/kcov_types.h b/include/linux/kcov_types.h
index 53b25b6f0addd..9d38a2020b099 100644
--- a/include/linux/kcov_types.h
+++ b/include/linux/kcov_types.h
@@ -7,9 +7,16 @@
struct kcov_state {
/* Size of the area (in long's). */
unsigned int size;
+ /*
+ * Pointer to user-provided memory used by KCOV. This memory may
+ * contain multiple buffers.
+ */
+ void *area;
+ /* Size of the trace (in long's). */
+ unsigned int trace_size;
/* Buffer for coverage collection, shared with the userspace. */
- void *area;
+ unsigned long *trace;
/*
* KCOV sequence number: incremented each time kcov is reenabled, used
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 8154ac1c1622e..2005fc7f578ee 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -194,11 +194,11 @@ static notrace unsigned long canonicalize_ip(unsigned long ip)
return ip;
}
-static notrace void kcov_append_to_buffer(unsigned long *area, int size,
+static notrace void kcov_append_to_buffer(unsigned long *trace, int size,
unsigned long ip)
{
/* The first 64-bit word is the number of subsequent PCs. */
- unsigned long pos = READ_ONCE(area[0]) + 1;
+ unsigned long pos = READ_ONCE(trace[0]) + 1;
if (likely(pos < size)) {
/*
@@ -208,9 +208,9 @@ static notrace void kcov_append_to_buffer(unsigned long *area, int size,
* overitten by the recursive __sanitizer_cov_trace_pc().
* Update pos before writing pc to avoid such interleaving.
*/
- WRITE_ONCE(area[0], pos);
+ WRITE_ONCE(trace[0], pos);
barrier();
- area[pos] = ip;
+ trace[pos] = ip;
}
}
@@ -224,8 +224,8 @@ void notrace __sanitizer_cov_trace_pc_guard(u32 *guard)
if (!check_kcov_mode(KCOV_MODE_TRACE_PC, current))
return;
- kcov_append_to_buffer(current->kcov_state.area,
- current->kcov_state.size,
+ kcov_append_to_buffer(current->kcov_state.trace,
+ current->kcov_state.trace_size,
canonicalize_ip(_RET_IP_));
}
EXPORT_SYMBOL(__sanitizer_cov_trace_pc_guard);
@@ -241,8 +241,8 @@ void notrace __sanitizer_cov_trace_pc(void)
if (!check_kcov_mode(KCOV_MODE_TRACE_PC, current))
return;
- kcov_append_to_buffer(current->kcov_state.area,
- current->kcov_state.size,
+ kcov_append_to_buffer(current->kcov_state.trace,
+ current->kcov_state.trace_size,
canonicalize_ip(_RET_IP_));
}
EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
@@ -251,9 +251,9 @@ EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
{
- struct task_struct *t;
- u64 *area;
u64 count, start_index, end_pos, max_pos;
+ struct task_struct *t;
+ u64 *trace;
t = current;
if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
@@ -265,22 +265,22 @@ static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
* We write all comparison arguments and types as u64.
* The buffer was allocated for t->kcov_state.size unsigned longs.
*/
- area = (u64 *)t->kcov_state.area;
+ trace = (u64 *)t->kcov_state.trace;
max_pos = t->kcov_state.size * sizeof(unsigned long);
- count = READ_ONCE(area[0]);
+ count = READ_ONCE(trace[0]);
/* Every record is KCOV_WORDS_PER_CMP 64-bit words. */
start_index = 1 + count * KCOV_WORDS_PER_CMP;
end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
if (likely(end_pos <= max_pos)) {
/* See comment in kcov_append_to_buffer(). */
- WRITE_ONCE(area[0], count + 1);
+ WRITE_ONCE(trace[0], count + 1);
barrier();
- area[start_index] = type;
- area[start_index + 1] = arg1;
- area[start_index + 2] = arg2;
- area[start_index + 3] = ip;
+ trace[start_index] = type;
+ trace[start_index + 1] = arg1;
+ trace[start_index + 2] = arg2;
+ trace[start_index + 3] = ip;
}
}
@@ -381,11 +381,13 @@ static void kcov_start(struct task_struct *t, struct kcov *kcov,
static void kcov_stop(struct task_struct *t)
{
+ int saved_sequence = t->kcov_state.sequence;
+
WRITE_ONCE(t->kcov_mode, KCOV_MODE_DISABLED);
barrier();
t->kcov = NULL;
- t->kcov_state.size = 0;
- t->kcov_state.area = NULL;
+ t->kcov_state = (typeof(t->kcov_state)){};
+ t->kcov_state.sequence = saved_sequence;
}
static void kcov_task_reset(struct task_struct *t)
@@ -734,6 +736,8 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
}
kcov->state.area = area;
kcov->state.size = size;
+ kcov->state.trace = area;
+ kcov->state.trace_size = size;
kcov->mode = KCOV_MODE_INIT;
spin_unlock_irqrestore(&kcov->lock, flags);
return 0;
@@ -925,10 +929,12 @@ void kcov_remote_start(u64 handle)
local_lock_irqsave(&kcov_percpu_data.lock, flags);
}
- /* Reset coverage size. */
- *(u64 *)area = 0;
state.area = area;
state.size = size;
+ state.trace = area;
+ state.trace_size = size;
+ /* Reset coverage size. */
+ state.trace[0] = 0;
if (in_serving_softirq()) {
kcov_remote_softirq_start(t);
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 07/10] kcov: add ioctl(KCOV_UNIQUE_ENABLE)
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
` (5 preceding siblings ...)
2025-07-31 11:51 ` [PATCH v4 06/10] kcov: add trace and trace_size to struct kcov_state Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 08/10] kcov: add ioctl(KCOV_RESET_TRACE) Alexander Potapenko
` (2 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Dmitry Vyukov,
Aleksandr Nogikh, Andrey Konovalov, Borislav Petkov, Dave Hansen,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
ioctl(KCOV_UNIQUE_ENABLE) enables collection of deduplicated coverage
when CONFIG_KCOV_UNIQUE is enabled.
The buffer shared with the userspace is divided into two parts: one holding
a bitmap, and the other one holding the trace. The single parameter of
ioctl(KCOV_UNIQUE_ENABLE) determines the number of words used for the
bitmap.
Each __sanitizer_cov_trace_pc_guard() instrumentation hook receives a
pointer to a unique guard variable. Upon the first call of each hook,
the guard variable is initialized with a unique integer, which is used to
map those hooks to bits in the bitmap. In the new coverage collection mode,
the kernel first checks whether the bit corresponding to a particular hook
is set, and then, if it is not, the PC is written into the trace buffer,
and the bit is set.
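
Schematically, the fast path in this mode is roughly (a sketch based on the
helpers introduced by this series, not verbatim kernel code):

  u32 id = READ_ONCE(*guard);

  if (unlikely(!id))
          id = init_pc_guard(guard);   /* assign the next free index */
  if (likely(id < t->kcov_state.bitmap_size) &&
      !__test_and_set_bit(id, t->kcov_state.bitmap))
          kcov_append_to_buffer(t->kcov_state.trace,
                                t->kcov_state.trace_size,
                                canonicalize_ip(_RET_IP_));
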
Note: when CONFIG_KCOV_UNIQUE is disabled, ioctl(KCOV_UNIQUE_ENABLE)
returns -ENOTSUPP, which is consistent with the existing kcov code.
Measuring the exact performance impact of this mode directly can be
challenging. However, based on fuzzing experiments (50 instances x 24h
with and without deduplication), we observe the following:
- When normalized by pure fuzzing time, total executions decreased
by 2.1% (p=0.01).
- When normalized by fuzzer uptime, the reduction in total executions
was statistically insignificant (-1.0% with p=0.20).
Despite a potential slight slowdown in execution count, the new mode
positively impacts fuzzing effectiveness:
- Statistically significant increase in corpus size (+0.6%, p<0.01).
- Statistically significant increase in coverage (+0.6%, p<0.01).
- A 99.8% reduction in coverage overflows.
Also update the documentation.
Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
---
v4:
Add Reviewed-by: Dmitry Vyukov
v3:
- s/check_kcov_mode/get_kcov_mode in objtool
v2:
- Address comments by Dmitry Vyukov:
- rename CONFIG_KCOV_ENABLE_GUARDS to CONFIG_KCOV_UNIQUE
- rename KCOV_MODE_TRACE_UNIQUE_PC to KCOV_MODE_UNIQUE_PC
- simplify index allocation
- update documentation and comments
- Address comments by Marco Elver:
- change _IOR to _IOW in KCOV_UNIQUE_ENABLE definition
- rename sanitizer_cov_write_subsequent() to kcov_append_to_buffer()
- Use __test_and_set_bit() to avoid the lock prefix on the bit operation
- Update code to match the new description of struct kcov_state
- Rename kcov_get_mode() to kcov_arg_to_mode() to avoid confusion with
get_kcov_mode(). Also make it use `enum kcov_mode`.
Change-Id: I9805e7b22619a50e05cc7c7d794dacf6f7de2f03
---
Documentation/dev-tools/kcov.rst | 43 ++++++++
include/linux/kcov.h | 2 +
include/linux/kcov_types.h | 8 ++
include/uapi/linux/kcov.h | 1 +
kernel/kcov.c | 164 ++++++++++++++++++++++++++-----
tools/objtool/check.c | 2 +-
6 files changed, 193 insertions(+), 27 deletions(-)
diff --git a/Documentation/dev-tools/kcov.rst b/Documentation/dev-tools/kcov.rst
index abf3ad2e784e8..6446887cd1c92 100644
--- a/Documentation/dev-tools/kcov.rst
+++ b/Documentation/dev-tools/kcov.rst
@@ -192,6 +192,49 @@ Normally the shared buffer is used as follows::
up to the buffer[0] value saved above |
+Unique coverage collection
+---------------------------
+
+Instead of collecting a trace of PCs, KCOV can deduplicate them on the fly.
+This mode is enabled by the ``KCOV_UNIQUE_ENABLE`` ioctl (only available if
+``CONFIG_KCOV_UNIQUE`` is on).
+
+.. code-block:: c
+
+ /* Same includes and defines as above. */
+ #define KCOV_UNIQUE_ENABLE _IOW('c', 103, unsigned long)
+ #define BITMAP_SIZE (4<<10)
+
+ /* Instead of KCOV_ENABLE, enable unique coverage collection. */
+ if (ioctl(fd, KCOV_UNIQUE_ENABLE, BITMAP_SIZE))
+ perror("ioctl"), exit(1);
+ /* Reset the coverage from the tail of the ioctl() call. */
+ __atomic_store_n(&cover[BITMAP_SIZE], 0, __ATOMIC_RELAXED);
+ memset(cover, 0, BITMAP_SIZE * sizeof(unsigned long));
+
+ /* Call the target syscall call. */
+ /* ... */
+
+ /* Read the number of collected PCs. */
+ n = __atomic_load_n(&cover[BITMAP_SIZE], __ATOMIC_RELAXED);
+ /* Disable the coverage collection. */
+ if (ioctl(fd, KCOV_DISABLE, 0))
+ perror("ioctl"), exit(1);
+
+Calling ``ioctl(fd, KCOV_UNIQUE_ENABLE, bitmap_size)`` carves out ``bitmap_size``
+unsigned long's from those allocated by ``KCOV_INIT_TRACE`` to keep an opaque
+bitmap that prevents the kernel from storing the same PC twice. The remaining
+part of the buffer is used to collect PCs, like in other modes (this part must
+contain at least two unsigned long's, like when collecting non-unique PCs).
+
+The mapping between a PC and its position in the bitmap is persistent during the
+kernel lifetime, so it is possible for the callers to directly use the bitmap
+contents as a coverage signal (like when fuzzing userspace with AFL).
+
+In order to reset the coverage between the runs, the user needs to rewind the
+trace (by writing 0 into the first buffer element past ``bitmap_size``) and zero
+the whole bitmap.
+
Comparison operands collection
------------------------------
diff --git a/include/linux/kcov.h b/include/linux/kcov.h
index 2acccfa5ae9af..cea2e62723ef9 100644
--- a/include/linux/kcov.h
+++ b/include/linux/kcov.h
@@ -10,6 +10,7 @@ struct task_struct;
#ifdef CONFIG_KCOV
enum kcov_mode {
+ KCOV_MODE_INVALID = -1,
/* Coverage collection is not enabled yet. */
KCOV_MODE_DISABLED = 0,
/* KCOV was initialized, but tracing mode hasn't been chosen yet. */
@@ -23,6 +24,7 @@ enum kcov_mode {
KCOV_MODE_TRACE_CMP = 3,
/* The process owns a KCOV remote reference. */
KCOV_MODE_REMOTE = 4,
+ KCOV_MODE_UNIQUE_PC = 5,
};
#define KCOV_IN_CTXSW (1 << 30)
diff --git a/include/linux/kcov_types.h b/include/linux/kcov_types.h
index 9d38a2020b099..8be930f47cd78 100644
--- a/include/linux/kcov_types.h
+++ b/include/linux/kcov_types.h
@@ -18,6 +18,14 @@ struct kcov_state {
/* Buffer for coverage collection, shared with the userspace. */
unsigned long *trace;
+ /* Size of the bitmap (in bits). */
+ unsigned int bitmap_size;
+ /*
+ * Bitmap for coverage deduplication, shared with the
+ * userspace.
+ */
+ unsigned long *bitmap;
+
/*
* KCOV sequence number: incremented each time kcov is reenabled, used
* by kcov_remote_stop(), see the comment there.
diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
index ed95dba9fa37e..e743ee011eeca 100644
--- a/include/uapi/linux/kcov.h
+++ b/include/uapi/linux/kcov.h
@@ -22,6 +22,7 @@ struct kcov_remote_arg {
#define KCOV_ENABLE _IO('c', 100)
#define KCOV_DISABLE _IO('c', 101)
#define KCOV_REMOTE_ENABLE _IOW('c', 102, struct kcov_remote_arg)
+#define KCOV_UNIQUE_ENABLE _IOW('c', 103, unsigned long)
enum {
/*
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 2005fc7f578ee..a92c848d17bce 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -28,6 +28,10 @@
#include <linux/log2.h>
#include <asm/setup.h>
+#ifdef CONFIG_KCOV_UNIQUE
+atomic_t kcov_guard_max_index = ATOMIC_INIT(0);
+#endif
+
#define kcov_debug(fmt, ...) pr_debug("%s: " fmt, __func__, ##__VA_ARGS__)
/* Number of 64-bit words written per one comparison: */
@@ -163,9 +167,9 @@ static __always_inline bool in_softirq_really(void)
return in_serving_softirq() && !in_hardirq() && !in_nmi();
}
-static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
+static notrace enum kcov_mode get_kcov_mode(struct task_struct *t)
{
- unsigned int mode;
+ enum kcov_mode mode;
/*
* We are interested in code coverage as a function of a syscall inputs,
@@ -173,7 +177,7 @@ static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_stru
* coverage collection section in a softirq.
*/
if (!in_task() && !(in_softirq_really() && t->kcov_softirq))
- return false;
+ return KCOV_MODE_INVALID;
mode = READ_ONCE(t->kcov_mode);
/*
* There is some code that runs in interrupts but for which
@@ -183,7 +187,7 @@ static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_stru
* kcov_start().
*/
barrier();
- return mode == needed_mode;
+ return mode;
}
static notrace unsigned long canonicalize_ip(unsigned long ip)
@@ -202,7 +206,7 @@ static notrace void kcov_append_to_buffer(unsigned long *trace, int size,
if (likely(pos < size)) {
/*
- * Some early interrupt code could bypass check_kcov_mode() check
+ * Some early interrupt code could bypass get_kcov_mode() check
* and invoke __sanitizer_cov_trace_pc(). If such interrupt is
* raised between writing pc and updating pos, the pc could be
* overitten by the recursive __sanitizer_cov_trace_pc().
@@ -219,14 +223,76 @@ static notrace void kcov_append_to_buffer(unsigned long *trace, int size,
* This is called once per basic-block/edge.
*/
#ifdef CONFIG_KCOV_UNIQUE
+DEFINE_PER_CPU(u32, saved_index);
+/*
+ * Assign an index to a guard variable that does not have one yet.
+ * For an unlikely case of a race with another task executing the same basic
+ * block for the first time with kcov enabled, we store the unused index in a
+ * per-cpu variable.
+ * In an even less likely case of the current task losing the race and getting
+ * rescheduled onto a CPU that already has a saved index, the index is
+ * discarded. This will result in an unused hole in the bitmap, but such events
+ * should have minor impact on the overall memory consumption.
+ */
+static __always_inline u32 init_pc_guard(u32 *guard)
+{
+ /* If the current CPU has a saved free index, use it. */
+ u32 index = this_cpu_xchg(saved_index, 0);
+ u32 old_guard;
+
+ if (likely(!index))
+ /*
+ * Allocate a new index. No overflow is possible, because 2**32
+ * unique basic blocks will take more space than the max size
+ * of the kernel text segment.
+ */
+ index = atomic_inc_return(&kcov_guard_max_index);
+
+ /*
+ * Make sure another task is not initializing the same guard
+ * concurrently.
+ */
+ old_guard = cmpxchg(guard, 0, index);
+ if (unlikely(old_guard)) {
+ /* We lost the race, save the index for future use. */
+ this_cpu_write(saved_index, index);
+ return old_guard;
+ }
+ return index;
+}
+
void notrace __sanitizer_cov_trace_pc_guard(u32 *guard)
{
- if (!check_kcov_mode(KCOV_MODE_TRACE_PC, current))
- return;
+ enum kcov_mode mode = get_kcov_mode(current);
+ u32 pc_index;
- kcov_append_to_buffer(current->kcov_state.trace,
- current->kcov_state.trace_size,
- canonicalize_ip(_RET_IP_));
+ switch (mode) {
+ case KCOV_MODE_UNIQUE_PC:
+ pc_index = READ_ONCE(*guard);
+ if (unlikely(!pc_index))
+ pc_index = init_pc_guard(guard);
+
+ /*
+ * Use the bitmap for coverage deduplication. We assume both
+ * s.bitmap and s.trace are non-NULL.
+ */
+ if (likely(pc_index < current->kcov_state.bitmap_size))
+ if (__test_and_set_bit(pc_index,
+ current->kcov_state.bitmap))
+ return;
+ /*
+ * If the PC is new, or the bitmap is too small, write PC to the
+ * trace.
+ */
+ fallthrough;
+ case KCOV_MODE_TRACE_PC:
+ kcov_append_to_buffer(current->kcov_state.trace,
+ current->kcov_state.trace_size,
+ canonicalize_ip(_RET_IP_));
+ break;
+ default:
+ return;
+ }
}
EXPORT_SYMBOL(__sanitizer_cov_trace_pc_guard);
@@ -238,7 +304,7 @@ EXPORT_SYMBOL(__sanitizer_cov_trace_pc_guard_init);
#else /* !CONFIG_KCOV_UNIQUE */
void notrace __sanitizer_cov_trace_pc(void)
{
- if (!check_kcov_mode(KCOV_MODE_TRACE_PC, current))
+ if (get_kcov_mode(current) != KCOV_MODE_TRACE_PC)
return;
kcov_append_to_buffer(current->kcov_state.trace,
@@ -256,7 +322,7 @@ static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
u64 *trace;
t = current;
- if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
+ if (get_kcov_mode(t) != KCOV_MODE_TRACE_CMP)
return;
ip = canonicalize_ip(ip);
@@ -374,7 +440,7 @@ static void kcov_start(struct task_struct *t, struct kcov *kcov,
t->kcov = kcov;
/* Cache in task struct for performance. */
t->kcov_state = *state;
- /* See comment in check_kcov_mode(). */
+ /* See comment in get_kcov_mode(). */
barrier();
WRITE_ONCE(t->kcov_mode, mode);
}
@@ -409,6 +475,10 @@ static void kcov_reset(struct kcov *kcov)
kcov->mode = KCOV_MODE_INIT;
kcov->remote = false;
kcov->remote_size = 0;
+ kcov->state.trace = kcov->state.area;
+ kcov->state.trace_size = kcov->state.size;
+ kcov->state.bitmap = NULL;
+ kcov->state.bitmap_size = 0;
kcov->state.sequence++;
}
@@ -549,18 +619,23 @@ static int kcov_close(struct inode *inode, struct file *filep)
return 0;
}
-static int kcov_get_mode(unsigned long arg)
+static enum kcov_mode kcov_arg_to_mode(unsigned long arg, int *error)
{
- if (arg == KCOV_TRACE_PC)
+ if (arg == KCOV_TRACE_PC) {
return KCOV_MODE_TRACE_PC;
- else if (arg == KCOV_TRACE_CMP)
+ } else if (arg == KCOV_TRACE_CMP) {
#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
return KCOV_MODE_TRACE_CMP;
#else
- return -ENOTSUPP;
+ if (error)
+ *error = -ENOTSUPP;
+ return KCOV_MODE_INVALID;
#endif
- else
- return -EINVAL;
+ } else {
+ if (error)
+ *error = -EINVAL;
+ return KCOV_MODE_INVALID;
+ }
}
/*
@@ -595,12 +670,47 @@ static inline bool kcov_check_handle(u64 handle, bool common_valid,
return false;
}
+static long kcov_handle_unique_enable(struct kcov *kcov,
+ unsigned long bitmap_words)
+{
+ struct task_struct *t = current;
+
+ if (!IS_ENABLED(CONFIG_KCOV_UNIQUE))
+ return -ENOTSUPP;
+ if (kcov->mode != KCOV_MODE_INIT || !kcov->state.area)
+ return -EINVAL;
+ if (kcov->t != NULL || t->kcov != NULL)
+ return -EBUSY;
+
+ /*
+ * The bitmap cannot be zero-sized, and it must leave at least two
+ * words of the buffer for the trace.
+ */
+ if ((!bitmap_words) || (bitmap_words >= (kcov->state.size - 1)))
+ return -EINVAL;
+
+ kcov->state.bitmap_size = bitmap_words * sizeof(unsigned long) * 8;
+ kcov->state.bitmap = kcov->state.area;
+ kcov->state.trace_size = kcov->state.size - bitmap_words;
+ kcov->state.trace = ((unsigned long *)kcov->state.area + bitmap_words);
+
+ kcov_fault_in_area(kcov);
+ kcov->mode = KCOV_MODE_UNIQUE_PC;
+ kcov_start(t, kcov, kcov->mode, &kcov->state);
+ kcov->t = t;
+ /* Put either in kcov_task_exit() or in KCOV_DISABLE. */
+ kcov_get(kcov);
+
+ return 0;
+}
+
static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
unsigned long arg)
{
struct task_struct *t;
unsigned long flags, unused;
- int mode, i;
+ enum kcov_mode mode;
+ int error = 0, i;
struct kcov_remote_arg *remote_arg;
struct kcov_remote *remote;
@@ -618,9 +728,9 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
t = current;
if (kcov->t != NULL || t->kcov != NULL)
return -EBUSY;
- mode = kcov_get_mode(arg);
- if (mode < 0)
- return mode;
+ mode = kcov_arg_to_mode(arg, &error);
+ if (mode == KCOV_MODE_INVALID)
+ return error;
kcov_fault_in_area(kcov);
kcov->mode = mode;
kcov_start(t, kcov, mode, &kcov->state);
@@ -628,6 +738,8 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
/* Put either in kcov_task_exit() or in KCOV_DISABLE. */
kcov_get(kcov);
return 0;
+ case KCOV_UNIQUE_ENABLE:
+ return kcov_handle_unique_enable(kcov, arg);
case KCOV_DISABLE:
/* Disable coverage for the current task. */
unused = arg;
@@ -646,9 +758,9 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
if (kcov->t != NULL || t->kcov != NULL)
return -EBUSY;
remote_arg = (struct kcov_remote_arg *)arg;
- mode = kcov_get_mode(remote_arg->trace_mode);
- if (mode < 0)
- return mode;
+ mode = kcov_arg_to_mode(remote_arg->trace_mode, &error);
+ if (mode == KCOV_MODE_INVALID)
+ return error;
if ((unsigned long)remote_arg->area_size >
LONG_MAX / sizeof(unsigned long))
return -EINVAL;
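A quick aside on the units in kcov_handle_unique_enable() above, since they are
easy to misread: the ioctl argument and trace_size are counted in words, while
bitmap_size is counted in bits. The standalone sketch below (hypothetical
numbers, not part of the patch) mirrors the same arithmetic.

    #include <stdio.h>

    /* Mirror of the kernel-side split, for illustration only. */
    static void split_kcov_area(unsigned long area_words, unsigned long bitmap_words)
    {
            /* Guard indices are checked against a bit count... */
            unsigned long bitmap_bits = bitmap_words * sizeof(unsigned long) * 8;
            /* ...while the remaining trace is still measured in words. */
            unsigned long trace_words = area_words - bitmap_words;

            printf("%lu dedup slots, %lu trace words\n", bitmap_bits, trace_words);
    }

    int main(void)
    {
            /* E.g. a 64K-word KCOV_INIT_TRACE area split with a 4096-word bitmap. */
            split_kcov_area(64 << 10, 4 << 10); /* 262144 slots, 61440 words */
            return 0;
    }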
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 60eb5faa27d28..f4ec041de0224 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1154,7 +1154,7 @@ static const char *uaccess_safe_builtin[] = {
"__tsan_unaligned_write16",
/* KCOV */
"write_comp_data",
- "check_kcov_mode",
+ "get_kcov_mode",
"__sanitizer_cov_trace_pc",
"__sanitizer_cov_trace_pc_guard",
"__sanitizer_cov_trace_const_cmp1",
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 08/10] kcov: add ioctl(KCOV_RESET_TRACE)
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
` (6 preceding siblings ...)
2025-07-31 11:51 ` [PATCH v4 07/10] kcov: add ioctl(KCOV_UNIQUE_ENABLE) Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 09/10] kcov: selftests: add kcov_test Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 10/10] kcov: use enum kcov_mode in kcov_mode_enabled() Alexander Potapenko
9 siblings, 0 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Dmitry Vyukov,
Aleksandr Nogikh, Andrey Konovalov, Borislav Petkov, Dave Hansen,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
Provide a mechanism to reset the coverage for the current task
without writing directly to the coverage buffer.
This is slower, but allows the fuzzers to map the coverage buffer
as read-only, making it harder to corrupt.
Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
---
v4:
- Add Reviewed-by: Dmitry Vyukov
v2:
- Update code to match the new description of struct kcov_state
Change-Id: I8f9e6c179d93ccbfe0296b14764e88fa837cfffe
---
Documentation/dev-tools/kcov.rst | 26 ++++++++++++++++++++++++++
include/uapi/linux/kcov.h | 1 +
kernel/kcov.c | 15 +++++++++++++++
3 files changed, 42 insertions(+)
diff --git a/Documentation/dev-tools/kcov.rst b/Documentation/dev-tools/kcov.rst
index 6446887cd1c92..e215c0651e16d 100644
--- a/Documentation/dev-tools/kcov.rst
+++ b/Documentation/dev-tools/kcov.rst
@@ -470,3 +470,29 @@ local tasks spawned by the process and the global task that handles USB bus #1:
perror("close"), exit(1);
return 0;
}
+
+
+Resetting coverage with KCOV_RESET_TRACE
+-------------------------------------------
+
+The ``KCOV_RESET_TRACE`` ioctl provides a mechanism to clear collected coverage
+data for the current task. It resets the program counter (PC) trace and, if
+``KCOV_UNIQUE_ENABLE`` mode is active, also zeroes the associated bitmap.
+
+The primary use case for this ioctl is to enhance safety during fuzzing.
+Normally, a user could map the kcov buffer with ``PROT_READ | PROT_WRITE`` and
+reset the trace from the user-space program. However, when fuzzing system calls,
+the kernel itself might inadvertently write to this shared buffer, corrupting
+the coverage data.
+
+To prevent this, a fuzzer can map the buffer with ``PROT_READ`` and use
+``ioctl(fd, KCOV_RESET_TRACE, 0)`` to safely clear the buffer from the kernel
+side before each fuzzing iteration.
+
+Note that:
+
+* This ioctl is safer but slower than directly writing to the shared memory
+ buffer due to the overhead of a system call.
+* ``KCOV_RESET_TRACE`` is itself a system call, and its execution will be traced
+ by kcov. Consequently, immediately after the ioctl returns, ``cover[0]`` will
+ be greater than 0.
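To make the intended read-only workflow concrete, a hedged sketch follows
(COVER_SIZE and the already-initialized kcov fd are hypothetical and mirror the
examples earlier in this document; KCOV_RESET_TRACE is the define added to the
uapi header below):

    unsigned long *cover;

    /* Map the buffer read-only so stray userspace writes cannot corrupt it. */
    cover = (unsigned long *)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                                  PROT_READ, MAP_SHARED, fd, 0);
    if ((void *)cover == MAP_FAILED)
            perror("mmap"), exit(1);
    if (ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC))
            perror("ioctl"), exit(1);

    /* Before each fuzzing iteration, clear the coverage from the kernel side. */
    if (ioctl(fd, KCOV_RESET_TRACE, 0))
            perror("ioctl"), exit(1);
    /* As noted above, cover[0] may already be non-zero at this point. */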
diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
index e743ee011eeca..8ab77cc3afa76 100644
--- a/include/uapi/linux/kcov.h
+++ b/include/uapi/linux/kcov.h
@@ -23,6 +23,7 @@ struct kcov_remote_arg {
#define KCOV_DISABLE _IO('c', 101)
#define KCOV_REMOTE_ENABLE _IOW('c', 102, struct kcov_remote_arg)
#define KCOV_UNIQUE_ENABLE _IOW('c', 103, unsigned long)
+#define KCOV_RESET_TRACE _IO('c', 104)
enum {
/*
diff --git a/kernel/kcov.c b/kernel/kcov.c
index a92c848d17bce..82ed4c6150c54 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -740,6 +740,21 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
return 0;
case KCOV_UNIQUE_ENABLE:
return kcov_handle_unique_enable(kcov, arg);
+ case KCOV_RESET_TRACE:
+ unused = arg;
+ if (unused != 0 || current->kcov != kcov)
+ return -EINVAL;
+ t = current;
+ if (WARN_ON(kcov->t != t))
+ return -EINVAL;
+ mode = kcov->mode;
+ if (mode < KCOV_MODE_TRACE_PC)
+ return -EINVAL;
+ if (kcov->state.bitmap)
+ bitmap_zero(kcov->state.bitmap,
+ kcov->state.bitmap_size);
+ WRITE_ONCE(kcov->state.trace[0], 0);
+ return 0;
case KCOV_DISABLE:
/* Disable coverage for the current task. */
unused = arg;
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 09/10] kcov: selftests: add kcov_test
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
` (7 preceding siblings ...)
2025-07-31 11:51 ` [PATCH v4 08/10] kcov: add ioctl(KCOV_RESET_TRACE) Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
2025-08-01 4:10 ` Dmitry Vyukov
2025-07-31 11:51 ` [PATCH v4 10/10] kcov: use enum kcov_mode in kcov_mode_enabled() Alexander Potapenko
9 siblings, 1 reply; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Aleksandr Nogikh,
Andrey Konovalov, Borislav Petkov, Dave Hansen, Dmitry Vyukov,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
Implement test fixtures for testing different combinations of coverage
collection modes:
- unique and non-unique coverage;
- collecting PCs and comparison arguments;
- mapping the buffer as RO and RW.
To build:
$ make -C tools/testing/selftests/kcov kcov_test
Signed-off-by: Alexander Potapenko <glider@google.com>
---
v4:
- Per Dmitry Vyukov's request, add CONFIG_KCOV_UNIQUE=y to the
list of required configs
v3:
- Address comments by Dmitry Vyukov:
- add tools/testing/selftests/kcov/config
- add ifdefs to KCOV_UNIQUE_ENABLE and KCOV_RESET_TRACE
- Properly handle/reset the coverage buffer when collecting unique
coverage
Change-Id: I0793f1b91685873c77bcb222a03f64321244df8f
---
MAINTAINERS | 1 +
tools/testing/selftests/kcov/Makefile | 6 +
tools/testing/selftests/kcov/config | 2 +
tools/testing/selftests/kcov/kcov_test.c | 401 +++++++++++++++++++++++
4 files changed, 410 insertions(+)
create mode 100644 tools/testing/selftests/kcov/Makefile
create mode 100644 tools/testing/selftests/kcov/config
create mode 100644 tools/testing/selftests/kcov/kcov_test.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 6906eb9d88dae..c1d64cef693b9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13018,6 +13018,7 @@ F: include/linux/kcov_types.h
F: include/uapi/linux/kcov.h
F: kernel/kcov.c
F: scripts/Makefile.kcov
+F: tools/testing/selftests/kcov/
KCSAN
M: Marco Elver <elver@google.com>
diff --git a/tools/testing/selftests/kcov/Makefile b/tools/testing/selftests/kcov/Makefile
new file mode 100644
index 0000000000000..08abf8b60bcf9
--- /dev/null
+++ b/tools/testing/selftests/kcov/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0-only
+LDFLAGS += -static
+
+TEST_GEN_PROGS := kcov_test
+
+include ../lib.mk
diff --git a/tools/testing/selftests/kcov/config b/tools/testing/selftests/kcov/config
new file mode 100644
index 0000000000000..ba0c1a0bc8bf2
--- /dev/null
+++ b/tools/testing/selftests/kcov/config
@@ -0,0 +1,2 @@
+CONFIG_KCOV=y
+CONFIG_KCOV_UNIQUE=y
diff --git a/tools/testing/selftests/kcov/kcov_test.c b/tools/testing/selftests/kcov/kcov_test.c
new file mode 100644
index 0000000000000..daf12aeb374b5
--- /dev/null
+++ b/tools/testing/selftests/kcov/kcov_test.c
@@ -0,0 +1,401 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test the kernel coverage (/sys/kernel/debug/kcov).
+ *
+ * Copyright 2025 Google LLC.
+ */
+#include <fcntl.h>
+#include <linux/kcov.h>
+#include <stdint.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+#include "../kselftest_harness.h"
+
+/* Normally these defines should be provided by linux/kcov.h, but they aren't there yet. */
+#ifndef KCOV_UNIQUE_ENABLE
+#define KCOV_UNIQUE_ENABLE _IOW('c', 103, unsigned long)
+#endif
+#ifndef KCOV_RESET_TRACE
+#define KCOV_RESET_TRACE _IO('c', 104)
+#endif
+
+#define COVER_SIZE (64 << 10)
+#define BITMAP_SIZE (4 << 10)
+
+#define DEBUG_COVER_PCS 0
+
+FIXTURE(kcov)
+{
+ int fd;
+ unsigned long *mapping;
+ size_t mapping_size;
+};
+
+FIXTURE_VARIANT(kcov)
+{
+ int mode;
+ bool fast_reset;
+ bool map_readonly;
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(kcov, mode_trace_pc)
+{
+ /* clang-format on */
+ .mode = KCOV_TRACE_PC,
+ .fast_reset = true,
+ .map_readonly = false,
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(kcov, mode_trace_cmp)
+{
+ /* clang-format on */
+ .mode = KCOV_TRACE_CMP,
+ .fast_reset = true,
+ .map_readonly = false,
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(kcov, reset_ioctl_rw)
+{
+ /* clang-format on */
+ .mode = KCOV_TRACE_PC,
+ .fast_reset = false,
+ .map_readonly = false,
+};
+
+FIXTURE_VARIANT_ADD(kcov, reset_ioctl_ro)
+/* clang-format off */
+{
+ /* clang-format on */
+ .mode = KCOV_TRACE_PC,
+ .fast_reset = false,
+ .map_readonly = true,
+};
+
+int kcov_open_init(struct __test_metadata *_metadata, unsigned long size,
+ int prot, unsigned long **out_mapping)
+{
+ unsigned long *mapping;
+
+ /* A single fd descriptor allows coverage collection on a single thread. */
+ int fd = open("/sys/kernel/debug/kcov", O_RDWR);
+
+ ASSERT_NE(fd, -1)
+ {
+ perror("open");
+ }
+
+ EXPECT_EQ(ioctl(fd, KCOV_INIT_TRACE, size), 0)
+ {
+ perror("ioctl KCOV_INIT_TRACE");
+ close(fd);
+ }
+
+ /* Mmap buffer shared between kernel- and user-space. */
+ mapping = (unsigned long *)mmap(NULL, size * sizeof(unsigned long),
+ prot, MAP_SHARED, fd, 0);
+ ASSERT_NE((void *)mapping, MAP_FAILED)
+ {
+ perror("mmap");
+ close(fd);
+ }
+ *out_mapping = mapping;
+
+ return fd;
+}
+
+FIXTURE_SETUP(kcov)
+{
+ int prot = variant->map_readonly ? PROT_READ : (PROT_READ | PROT_WRITE);
+
+ /* Read-only mapping is incompatible with fast reset. */
+ ASSERT_FALSE(variant->map_readonly && variant->fast_reset);
+
+ self->mapping_size = COVER_SIZE;
+ self->fd = kcov_open_init(_metadata, self->mapping_size, prot,
+ &(self->mapping));
+
+ /* Enable coverage collection on the current thread. */
+ EXPECT_EQ(ioctl(self->fd, KCOV_ENABLE, variant->mode), 0)
+ {
+ perror("ioctl KCOV_ENABLE");
+ /* Cleanup will be handled by FIXTURE_TEARDOWN. */
+ return;
+ }
+}
+
+void kcov_uninit_close(struct __test_metadata *_metadata, int fd,
+ unsigned long *mapping, size_t size)
+{
+ /* Disable coverage collection for the current thread. */
+ EXPECT_EQ(ioctl(fd, KCOV_DISABLE, 0), 0)
+ {
+ perror("ioctl KCOV_DISABLE");
+ }
+
+ /* Free resources. */
+ EXPECT_EQ(munmap(mapping, size * sizeof(unsigned long)), 0)
+ {
+ perror("munmap");
+ }
+
+ EXPECT_EQ(close(fd), 0)
+ {
+ perror("close");
+ }
+}
+
+FIXTURE_TEARDOWN(kcov)
+{
+ kcov_uninit_close(_metadata, self->fd, self->mapping,
+ self->mapping_size);
+}
+
+void dump_collected_pcs(struct __test_metadata *_metadata, unsigned long *cover,
+ size_t start, size_t end)
+{
+ int i = 0;
+
+ TH_LOG("Collected %lu PCs", end - start);
+#if DEBUG_COVER_PCS
+ for (i = start; i < end; i++)
+ TH_LOG("0x%lx", cover[i + 1]);
+#endif
+}
+
+/* Coverage collection helper without assertions. */
+unsigned long collect_coverage_unchecked(struct __test_metadata *_metadata,
+ unsigned long *cover, bool dump)
+{
+ unsigned long before, after;
+
+ before = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
+ /*
+ * Call the target syscall. Here we use read(-1, NULL, 0) as an example.
+ * This will likely return an error (-EFAULT or -EBADF), but the goal is to
+ * collect coverage for the syscall's entry/exit paths.
+ */
+ read(-1, NULL, 0);
+
+ after = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
+
+ if (dump)
+ dump_collected_pcs(_metadata, cover, before, after);
+ return after - before;
+}
+
+unsigned long collect_coverage_once(struct __test_metadata *_metadata,
+ unsigned long *cover)
+{
+ unsigned long collected =
+ collect_coverage_unchecked(_metadata, cover, /*dump*/ true);
+
+ /* Coverage must be non-zero. */
+ EXPECT_GT(collected, 0);
+ return collected;
+}
+
+void reset_coverage(struct __test_metadata *_metadata, bool fast, int fd,
+ unsigned long *mapping)
+{
+ unsigned long count;
+
+ if (fast) {
+ __atomic_store_n(&mapping[0], 0, __ATOMIC_RELAXED);
+ } else {
+ EXPECT_EQ(ioctl(fd, KCOV_RESET_TRACE, 0), 0)
+ {
+ perror("ioctl KCOV_RESET_TRACE");
+ }
+ count = __atomic_load_n(&mapping[0], __ATOMIC_RELAXED);
+ EXPECT_NE(count, 0);
+ }
+}
+
+TEST_F(kcov, kcov_basic_syscall_coverage)
+{
+ unsigned long first, second, before, after, i;
+
+ /* Reset coverage that may be left over from the fixture setup. */
+ reset_coverage(_metadata, variant->fast_reset, self->fd, self->mapping);
+
+ /* Collect the coverage for a single syscall two times in a row. */
+ first = collect_coverage_once(_metadata, self->mapping);
+ second = collect_coverage_once(_metadata, self->mapping);
+ /* Collected coverage should not differ too much. */
+ EXPECT_GT(first * 10, second);
+ EXPECT_GT(second * 10, first);
+
+ /* Now reset the buffer and collect the coverage again. */
+ reset_coverage(_metadata, variant->fast_reset, self->fd, self->mapping);
+ collect_coverage_once(_metadata, self->mapping);
+
+ /* Now try many times to fill up the buffer. */
+ reset_coverage(_metadata, variant->fast_reset, self->fd, self->mapping);
+ while (collect_coverage_unchecked(_metadata, self->mapping,
+ /*dump*/ false)) {
+ /* Do nothing. */
+ }
+ before = __atomic_load_n(&(self->mapping[0]), __ATOMIC_RELAXED);
+ /*
+ * Resetting with ioctl may still generate some coverage, but much less
+ * than there was before.
+ */
+ reset_coverage(_metadata, variant->fast_reset, self->fd, self->mapping);
+ after = __atomic_load_n(&(self->mapping[0]), __ATOMIC_RELAXED);
+ EXPECT_GT(before, after);
+ /* Collecting coverage after reset will now succeed. */
+ collect_coverage_once(_metadata, self->mapping);
+}
+
+FIXTURE(kcov_uniq)
+{
+ int fd;
+ unsigned long *mapping;
+ size_t mapping_size;
+ unsigned long *bitmap;
+ size_t bitmap_size;
+ unsigned long *cover;
+ size_t cover_size;
+};
+
+FIXTURE_VARIANT(kcov_uniq)
+{
+ bool fast_reset;
+ bool map_readonly;
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(kcov_uniq, fast_rw)
+{
+ /* clang-format on */
+ .fast_reset = true,
+ .map_readonly = false,
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(kcov_uniq, slow_rw)
+{
+ /* clang-format on */
+ .fast_reset = false,
+ .map_readonly = false,
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(kcov_uniq, slow_ro)
+{
+ /* clang-format on */
+ .fast_reset = false,
+ .map_readonly = true,
+};
+
+FIXTURE_SETUP(kcov_uniq)
+{
+ int prot = variant->map_readonly ? PROT_READ : (PROT_READ | PROT_WRITE);
+
+ /* Read-only mapping is incompatible with fast reset. */
+ ASSERT_FALSE(variant->map_readonly && variant->fast_reset);
+
+ self->mapping_size = COVER_SIZE;
+ self->fd = kcov_open_init(_metadata, self->mapping_size, prot,
+ &(self->mapping));
+
+ self->bitmap = self->mapping;
+ self->bitmap_size = BITMAP_SIZE;
+ /*
+ * Enable unique coverage collection on the current thread. Carve out
+ * self->bitmap_size unsigned long's for the bitmap.
+ */
+ EXPECT_EQ(ioctl(self->fd, KCOV_UNIQUE_ENABLE, self->bitmap_size), 0)
+ {
+ perror("ioctl KCOV_ENABLE");
+ /* Cleanup will be handled by FIXTURE_TEARDOWN. */
+ return;
+ }
+ self->cover = self->mapping + BITMAP_SIZE;
+ self->cover_size = self->mapping_size - BITMAP_SIZE;
+}
+
+FIXTURE_TEARDOWN(kcov_uniq)
+{
+ kcov_uninit_close(_metadata, self->fd, self->mapping,
+ self->mapping_size);
+}
+
+void reset_uniq_coverage(struct __test_metadata *_metadata, bool fast, int fd,
+ unsigned long *bitmap, unsigned long *cover)
+{
+ unsigned long count;
+
+ if (fast) {
+ /*
+ * Resetting the buffer for unique coverage collection requires
+ * zeroing out the bitmap and cover[0]. We are assuming that
+ * the coverage buffer immediately follows the bitmap, as they
+ * belong to the same memory mapping.
+ */
+ if (cover > bitmap)
+ memset(bitmap, 0, sizeof(unsigned long) * (cover - bitmap));
+ __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
+ } else {
+ EXPECT_EQ(ioctl(fd, KCOV_RESET_TRACE, 0), 0)
+ {
+ perror("ioctl KCOV_RESET_TRACE");
+ }
+ count = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
+ EXPECT_NE(count, 0);
+ }
+}
+
+TEST_F(kcov_uniq, kcov_uniq_coverage)
+{
+ unsigned long first, second, before, after, i;
+
+ /* Reset coverage that may be left over from the fixture setup. */
+ reset_uniq_coverage(_metadata, variant->fast_reset, self->fd, self->bitmap, self->cover);
+
+ /*
+ * Collect the coverage for a single syscall two times in a row.
+ * Use collect_coverage_unchecked(), because it may return zero coverage.
+ */
+ first = collect_coverage_unchecked(_metadata, self->cover,
+ /*dump*/ true);
+ second = collect_coverage_unchecked(_metadata, self->cover,
+ /*dump*/ true);
+
+ /* Now reset the buffer and collect the coverage again. */
+ reset_uniq_coverage(_metadata, variant->fast_reset, self->fd, self->bitmap, self->cover);
+ collect_coverage_once(_metadata, self->cover);
+
+ /* Now try many times to saturate the unique coverage bitmap. */
+ reset_uniq_coverage(_metadata, variant->fast_reset, self->fd, self->bitmap, self->cover);
+ for (i = 0; i < 1000; i++)
+ collect_coverage_unchecked(_metadata, self->cover,
+ /*dump*/ false);
+
+ /* Another invocation of collect_coverage_unchecked() should not produce new coverage. */
+ EXPECT_EQ(collect_coverage_unchecked(_metadata, self->cover,
+ /*dump*/ false),
+ 0);
+
+ before = __atomic_load_n(&(self->cover[0]), __ATOMIC_RELAXED);
+ /*
+ * Resetting with ioctl may still generate some coverage, but much less
+ * than there was before.
+ */
+ reset_uniq_coverage(_metadata, variant->fast_reset, self->fd, self->bitmap, self->cover);
+ after = __atomic_load_n(&(self->cover[0]), __ATOMIC_RELAXED);
+ EXPECT_GT(before, after);
+ /* Collecting coverage after reset will now succeed. */
+ collect_coverage_once(_metadata, self->cover);
+}
+
+TEST_HARNESS_MAIN
--
2.50.1.552.g942d659e1b-goog
* [PATCH v4 10/10] kcov: use enum kcov_mode in kcov_mode_enabled()
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
` (8 preceding siblings ...)
2025-07-31 11:51 ` [PATCH v4 09/10] kcov: selftests: add kcov_test Alexander Potapenko
@ 2025-07-31 11:51 ` Alexander Potapenko
9 siblings, 0 replies; 13+ messages in thread
From: Alexander Potapenko @ 2025-07-31 11:51 UTC (permalink / raw)
To: glider
Cc: quic_jiangenj, linux-kernel, kasan-dev, Dmitry Vyukov,
Aleksandr Nogikh, Andrey Konovalov, Borislav Petkov, Dave Hansen,
Ingo Molnar, Josh Poimboeuf, Marco Elver, Peter Zijlstra,
Thomas Gleixner
Replace the remaining declarations of `unsigned int mode` with
`enum kcov_mode mode`. No functional change.
Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
---
v4:
- Add Reviewed-by: Dmitry Vyukov
Change-Id: I739b293c1f689cc99ef4adbe38bdac5813802efe
---
kernel/kcov.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 82ed4c6150c54..6b7c21280fcd5 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -949,7 +949,7 @@ static const struct file_operations kcov_fops = {
* collecting coverage and copies all collected coverage into the kcov area.
*/
-static inline bool kcov_mode_enabled(unsigned int mode)
+static inline bool kcov_mode_enabled(enum kcov_mode mode)
{
return (mode & ~KCOV_IN_CTXSW) != KCOV_MODE_DISABLED;
}
@@ -957,7 +957,7 @@ static inline bool kcov_mode_enabled(unsigned int mode)
static void kcov_remote_softirq_start(struct task_struct *t)
{
struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
- unsigned int mode;
+ enum kcov_mode mode;
mode = READ_ONCE(t->kcov_mode);
barrier();
@@ -1134,7 +1134,7 @@ void kcov_remote_stop(void)
{
struct task_struct *t = current;
struct kcov *kcov;
- unsigned int mode;
+ enum kcov_mode mode;
void *area;
unsigned int size;
int sequence;
--
2.50.1.552.g942d659e1b-goog
* Re: [PATCH v4 09/10] kcov: selftests: add kcov_test
2025-07-31 11:51 ` [PATCH v4 09/10] kcov: selftests: add kcov_test Alexander Potapenko
@ 2025-08-01 4:10 ` Dmitry Vyukov
0 siblings, 0 replies; 13+ messages in thread
From: Dmitry Vyukov @ 2025-08-01 4:10 UTC (permalink / raw)
To: Alexander Potapenko
Cc: quic_jiangenj, linux-kernel, kasan-dev, Aleksandr Nogikh,
Andrey Konovalov, Borislav Petkov, Dave Hansen, Ingo Molnar,
Josh Poimboeuf, Marco Elver, Peter Zijlstra, Thomas Gleixner
On Thu, 31 Jul 2025 at 13:52, Alexander Potapenko <glider@google.com> wrote:
>
> Implement test fixtures for testing different combinations of coverage
> collection modes:
> - unique and non-unique coverage;
> - collecting PCs and comparison arguments;
> - mapping the buffer as RO and RW.
>
> To build:
> $ make -C tools/testing/selftests/kcov kcov_test
>
> Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> ---
> v4:
> - Per Dmitry Vyukov's request, add CONFIG_KCOV_UNIQUE=y to the
> list of required configs
> v3:
> - Address comments by Dmitry Vyukov:
> - add tools/testing/selftests/kcov/config
> - add ifdefs to KCOV_UNIQUE_ENABLE and KCOV_RESET_TRACE
> - Properly handle/reset the coverage buffer when collecting unique
> coverage
>
> Change-Id: I0793f1b91685873c77bcb222a03f64321244df8f
> ---
> MAINTAINERS | 1 +
> tools/testing/selftests/kcov/Makefile | 6 +
> tools/testing/selftests/kcov/config | 2 +
> tools/testing/selftests/kcov/kcov_test.c | 401 +++++++++++++++++++++++
> 4 files changed, 410 insertions(+)
> create mode 100644 tools/testing/selftests/kcov/Makefile
> create mode 100644 tools/testing/selftests/kcov/config
> create mode 100644 tools/testing/selftests/kcov/kcov_test.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 6906eb9d88dae..c1d64cef693b9 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -13018,6 +13018,7 @@ F: include/linux/kcov_types.h
> F: include/uapi/linux/kcov.h
> F: kernel/kcov.c
> F: scripts/Makefile.kcov
> +F: tools/testing/selftests/kcov/
>
> KCSAN
> M: Marco Elver <elver@google.com>
> diff --git a/tools/testing/selftests/kcov/Makefile b/tools/testing/selftests/kcov/Makefile
> new file mode 100644
> index 0000000000000..08abf8b60bcf9
> --- /dev/null
> +++ b/tools/testing/selftests/kcov/Makefile
> @@ -0,0 +1,6 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +LDFLAGS += -static
> +
> +TEST_GEN_PROGS := kcov_test
> +
> +include ../lib.mk
> diff --git a/tools/testing/selftests/kcov/config b/tools/testing/selftests/kcov/config
> new file mode 100644
> index 0000000000000..ba0c1a0bc8bf2
> --- /dev/null
> +++ b/tools/testing/selftests/kcov/config
> @@ -0,0 +1,2 @@
> +CONFIG_KCOV=y
> +CONFIG_KCOV_UNIQUE=y
> diff --git a/tools/testing/selftests/kcov/kcov_test.c b/tools/testing/selftests/kcov/kcov_test.c
> new file mode 100644
> index 0000000000000..daf12aeb374b5
> --- /dev/null
> +++ b/tools/testing/selftests/kcov/kcov_test.c
> @@ -0,0 +1,401 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Test the kernel coverage (/sys/kernel/debug/kcov).
> + *
> + * Copyright 2025 Google LLC.
> + */
> +#include <fcntl.h>
> +#include <linux/kcov.h>
> +#include <stdint.h>
> +#include <stddef.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <sys/types.h>
> +#include <unistd.h>
> +
> +#include "../kselftest_harness.h"
> +
> +/* Normally these defines should be provided by linux/kcov.h, but they aren't there yet. */
> +#ifndef KCOV_UNIQUE_ENABLE
> +#define KCOV_UNIQUE_ENABLE _IOW('c', 103, unsigned long)
> +#endif
> +#ifndef KCOV_RESET_TRACE
> +#define KCOV_RESET_TRACE _IO('c', 104)
> +#endif
> +
> +#define COVER_SIZE (64 << 10)
> +#define BITMAP_SIZE (4 << 10)
> +
> +#define DEBUG_COVER_PCS 0
> +
> +FIXTURE(kcov)
> +{
> + int fd;
> + unsigned long *mapping;
> + size_t mapping_size;
> +};
> +
> +FIXTURE_VARIANT(kcov)
> +{
> + int mode;
> + bool fast_reset;
> + bool map_readonly;
> +};
> +
> +/* clang-format off */
> +FIXTURE_VARIANT_ADD(kcov, mode_trace_pc)
> +{
> + /* clang-format on */
> + .mode = KCOV_TRACE_PC,
> + .fast_reset = true,
> + .map_readonly = false,
> +};
> +
> +/* clang-format off */
> +FIXTURE_VARIANT_ADD(kcov, mode_trace_cmp)
> +{
> + /* clang-format on */
> + .mode = KCOV_TRACE_CMP,
> + .fast_reset = true,
> + .map_readonly = false,
> +};
> +
> +/* clang-format off */
> +FIXTURE_VARIANT_ADD(kcov, reset_ioctl_rw)
> +{
> + /* clang-format on */
> + .mode = KCOV_TRACE_PC,
> + .fast_reset = false,
> + .map_readonly = false,
> +};
> +
> +FIXTURE_VARIANT_ADD(kcov, reset_ioctl_ro)
> +/* clang-format off */
> +{
> + /* clang-format on */
> + .mode = KCOV_TRACE_PC,
> + .fast_reset = false,
> + .map_readonly = true,
> +};
> +
> +int kcov_open_init(struct __test_metadata *_metadata, unsigned long size,
> + int prot, unsigned long **out_mapping)
> +{
> + unsigned long *mapping;
> +
> + /* A single fd descriptor allows coverage collection on a single thread. */
> + int fd = open("/sys/kernel/debug/kcov", O_RDWR);
> +
> + ASSERT_NE(fd, -1)
> + {
> + perror("open");
> + }
> +
> + EXPECT_EQ(ioctl(fd, KCOV_INIT_TRACE, size), 0)
> + {
> + perror("ioctl KCOV_INIT_TRACE");
> + close(fd);
> + }
> +
> + /* Mmap buffer shared between kernel- and user-space. */
> + mapping = (unsigned long *)mmap(NULL, size * sizeof(unsigned long),
> + prot, MAP_SHARED, fd, 0);
> + ASSERT_NE((void *)mapping, MAP_FAILED)
> + {
> + perror("mmap");
> + close(fd);
> + }
> + *out_mapping = mapping;
> +
> + return fd;
> +}
> +
> +FIXTURE_SETUP(kcov)
> +{
> + int prot = variant->map_readonly ? PROT_READ : (PROT_READ | PROT_WRITE);
> +
> + /* Read-only mapping is incompatible with fast reset. */
> + ASSERT_FALSE(variant->map_readonly && variant->fast_reset);
> +
> + self->mapping_size = COVER_SIZE;
> + self->fd = kcov_open_init(_metadata, self->mapping_size, prot,
> + &(self->mapping));
> +
> + /* Enable coverage collection on the current thread. */
> + EXPECT_EQ(ioctl(self->fd, KCOV_ENABLE, variant->mode), 0)
> + {
> + perror("ioctl KCOV_ENABLE");
> + /* Cleanup will be handled by FIXTURE_TEARDOWN. */
> + return;
> + }
> +}
> +
> +void kcov_uninit_close(struct __test_metadata *_metadata, int fd,
> + unsigned long *mapping, size_t size)
> +{
> + /* Disable coverage collection for the current thread. */
> + EXPECT_EQ(ioctl(fd, KCOV_DISABLE, 0), 0)
> + {
> + perror("ioctl KCOV_DISABLE");
> + }
> +
> + /* Free resources. */
> + EXPECT_EQ(munmap(mapping, size * sizeof(unsigned long)), 0)
> + {
> + perror("munmap");
> + }
> +
> + EXPECT_EQ(close(fd), 0)
> + {
> + perror("close");
> + }
> +}
> +
> +FIXTURE_TEARDOWN(kcov)
> +{
> + kcov_uninit_close(_metadata, self->fd, self->mapping,
> + self->mapping_size);
> +}
> +
> +void dump_collected_pcs(struct __test_metadata *_metadata, unsigned long *cover,
> + size_t start, size_t end)
> +{
> + int i = 0;
> +
> + TH_LOG("Collected %lu PCs", end - start);
> +#if DEBUG_COVER_PCS
> + for (i = start; i < end; i++)
> + TH_LOG("0x%lx", cover[i + 1]);
> +#endif
> +}
> +
> +/* Coverage collection helper without assertions. */
> +unsigned long collect_coverage_unchecked(struct __test_metadata *_metadata,
> + unsigned long *cover, bool dump)
> +{
> + unsigned long before, after;
> +
> + before = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
> + /*
> + * Call the target syscall. Here we use read(-1, NULL, 0) as an example.
> + * This will likely return an error (-EFAULT or -EBADF), but the goal is to
> + * collect coverage for the syscall's entry/exit paths.
> + */
> + read(-1, NULL, 0);
> +
> + after = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
> +
> + if (dump)
> + dump_collected_pcs(_metadata, cover, before, after);
> + return after - before;
> +}
> +
> +unsigned long collect_coverage_once(struct __test_metadata *_metadata,
> + unsigned long *cover)
> +{
> + unsigned long collected =
> + collect_coverage_unchecked(_metadata, cover, /*dump*/ true);
> +
> + /* Coverage must be non-zero. */
> + EXPECT_GT(collected, 0);
> + return collected;
> +}
> +
> +void reset_coverage(struct __test_metadata *_metadata, bool fast, int fd,
> + unsigned long *mapping)
> +{
> + unsigned long count;
> +
> + if (fast) {
> + __atomic_store_n(&mapping[0], 0, __ATOMIC_RELAXED);
> + } else {
> + EXPECT_EQ(ioctl(fd, KCOV_RESET_TRACE, 0), 0)
> + {
> + perror("ioctl KCOV_RESET_TRACE");
> + }
> + count = __atomic_load_n(&mapping[0], __ATOMIC_RELAXED);
> + EXPECT_NE(count, 0);
> + }
> +}
> +
> +TEST_F(kcov, kcov_basic_syscall_coverage)
> +{
> + unsigned long first, second, before, after, i;
> +
> + /* Reset coverage that may be left over from the fixture setup. */
> + reset_coverage(_metadata, variant->fast_reset, self->fd, self->mapping);
> +
> + /* Collect the coverage for a single syscall two times in a row. */
> + first = collect_coverage_once(_metadata, self->mapping);
> + second = collect_coverage_once(_metadata, self->mapping);
> + /* Collected coverage should not differ too much. */
> + EXPECT_GT(first * 10, second);
> + EXPECT_GT(second * 10, first);
> +
> + /* Now reset the buffer and collect the coverage again. */
> + reset_coverage(_metadata, variant->fast_reset, self->fd, self->mapping);
> + collect_coverage_once(_metadata, self->mapping);
> +
> + /* Now try many times to fill up the buffer. */
> + reset_coverage(_metadata, variant->fast_reset, self->fd, self->mapping);
> + while (collect_coverage_unchecked(_metadata, self->mapping,
> + /*dump*/ false)) {
> + /* Do nothing. */
> + }
> + before = __atomic_load_n(&(self->mapping[0]), __ATOMIC_RELAXED);
> + /*
> + * Resetting with ioctl may still generate some coverage, but much less
> + * than there was before.
> + */
> + reset_coverage(_metadata, variant->fast_reset, self->fd, self->mapping);
> + after = __atomic_load_n(&(self->mapping[0]), __ATOMIC_RELAXED);
> + EXPECT_GT(before, after);
> + /* Collecting coverage after reset will now succeed. */
> + collect_coverage_once(_metadata, self->mapping);
> +}
> +
> +FIXTURE(kcov_uniq)
> +{
> + int fd;
> + unsigned long *mapping;
> + size_t mapping_size;
> + unsigned long *bitmap;
> + size_t bitmap_size;
> + unsigned long *cover;
> + size_t cover_size;
> +};
> +
> +FIXTURE_VARIANT(kcov_uniq)
> +{
> + bool fast_reset;
> + bool map_readonly;
> +};
> +
> +/* clang-format off */
> +FIXTURE_VARIANT_ADD(kcov_uniq, fast_rw)
> +{
> + /* clang-format on */
> + .fast_reset = true,
> + .map_readonly = false,
> +};
> +
> +/* clang-format off */
> +FIXTURE_VARIANT_ADD(kcov_uniq, slow_rw)
> +{
> + /* clang-format on */
> + .fast_reset = false,
> + .map_readonly = false,
> +};
> +
> +/* clang-format off */
> +FIXTURE_VARIANT_ADD(kcov_uniq, slow_ro)
> +{
> + /* clang-format on */
> + .fast_reset = false,
> + .map_readonly = true,
> +};
> +
> +FIXTURE_SETUP(kcov_uniq)
> +{
> + int prot = variant->map_readonly ? PROT_READ : (PROT_READ | PROT_WRITE);
> +
> + /* Read-only mapping is incompatible with fast reset. */
> + ASSERT_FALSE(variant->map_readonly && variant->fast_reset);
> +
> + self->mapping_size = COVER_SIZE;
> + self->fd = kcov_open_init(_metadata, self->mapping_size, prot,
> + &(self->mapping));
> +
> + self->bitmap = self->mapping;
> + self->bitmap_size = BITMAP_SIZE;
> + /*
> + * Enable unique coverage collection on the current thread. Carve out
> + * self->bitmap_size unsigned long's for the bitmap.
> + */
> + EXPECT_EQ(ioctl(self->fd, KCOV_UNIQUE_ENABLE, self->bitmap_size), 0)
> + {
> + perror("ioctl KCOV_ENABLE");
> + /* Cleanup will be handled by FIXTURE_TEARDOWN. */
> + return;
> + }
> + self->cover = self->mapping + BITMAP_SIZE;
> + self->cover_size = self->mapping_size - BITMAP_SIZE;
> +}
> +
> +FIXTURE_TEARDOWN(kcov_uniq)
> +{
> + kcov_uninit_close(_metadata, self->fd, self->mapping,
> + self->mapping_size);
> +}
> +
> +void reset_uniq_coverage(struct __test_metadata *_metadata, bool fast, int fd,
> + unsigned long *bitmap, unsigned long *cover)
> +{
> + unsigned long count;
> +
> + if (fast) {
> + /*
> + * Resetting the buffer for unique coverage collection requires
> + * zeroing out the bitmap and cover[0]. We are assuming that
> + * the coverage buffer immediately follows the bitmap, as they
> + * belong to the same memory mapping.
> + */
> + if (cover > bitmap)
> + memset(bitmap, 0, sizeof(unsigned long) * (cover - bitmap));
> + __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
> + } else {
> + EXPECT_EQ(ioctl(fd, KCOV_RESET_TRACE, 0), 0)
> + {
> + perror("ioctl KCOV_RESET_TRACE");
> + }
> + count = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
> + EXPECT_NE(count, 0);
> + }
> +}
> +
> +TEST_F(kcov_uniq, kcov_uniq_coverage)
> +{
> + unsigned long first, second, before, after, i;
> +
> + /* Reset coverage that may be left over from the fixture setup. */
> + reset_uniq_coverage(_metadata, variant->fast_reset, self->fd, self->bitmap, self->cover);
> +
> + /*
> + * Collect the coverage for a single syscall two times in a row.
> + * Use collect_coverage_unchecked(), because it may return zero coverage.
> + */
> + first = collect_coverage_unchecked(_metadata, self->cover,
> + /*dump*/ true);
> + second = collect_coverage_unchecked(_metadata, self->cover,
> + /*dump*/ true);
> +
> + /* Now reset the buffer and collect the coverage again. */
> + reset_uniq_coverage(_metadata, variant->fast_reset, self->fd, self->bitmap, self->cover);
> + collect_coverage_once(_metadata, self->cover);
> +
> + /* Now try many times to saturate the unique coverage bitmap. */
> + reset_uniq_coverage(_metadata, variant->fast_reset, self->fd, self->bitmap, self->cover);
> + for (i = 0; i < 1000; i++)
> + collect_coverage_unchecked(_metadata, self->cover,
> + /*dump*/ false);
> +
> + /* Another invocation of collect_coverage_unchecked() should not produce new coverage. */
> + EXPECT_EQ(collect_coverage_unchecked(_metadata, self->cover,
> + /*dump*/ false),
> + 0);
> +
> + before = __atomic_load_n(&(self->cover[0]), __ATOMIC_RELAXED);
> + /*
> + * Resetting with ioctl may still generate some coverage, but much less
> + * than there was before.
> + */
> + reset_uniq_coverage(_metadata, variant->fast_reset, self->fd, self->bitmap, self->cover);
> + after = __atomic_load_n(&(self->cover[0]), __ATOMIC_RELAXED);
> + EXPECT_GT(before, after);
> + /* Collecting coverage after reset will now succeed. */
> + collect_coverage_once(_metadata, self->cover);
> +}
> +
> +TEST_HARNESS_MAIN
> --
> 2.50.1.552.g942d659e1b-goog
>
* Re: [PATCH v4 05/10] kcov: x86: introduce CONFIG_KCOV_UNIQUE
2025-07-31 11:51 ` [PATCH v4 05/10] kcov: x86: introduce CONFIG_KCOV_UNIQUE Alexander Potapenko
@ 2025-08-26 8:14 ` Joey Jiao
0 siblings, 0 replies; 13+ messages in thread
From: Joey Jiao @ 2025-08-26 8:14 UTC (permalink / raw)
To: Alexander Potapenko
Cc: linux-kernel, kasan-dev, x86, Dmitry Vyukov, Aleksandr Nogikh,
Andrey Konovalov, Borislav Petkov, Dave Hansen, Ingo Molnar,
Josh Poimboeuf, Marco Elver, Peter Zijlstra, Thomas Gleixner
On Thu, Jul 31, 2025 at 01:51:34PM +0200, Alexander Potapenko wrote:
> The new config switches coverage instrumentation to using
> __sanitizer_cov_trace_pc_guard(u32 *guard)
> instead of
> __sanitizer_cov_trace_pc(void)
>
> This relies on Clang's -fsanitize-coverage=trace-pc-guard flag [1].
>
> Each callback receives a unique 32-bit guard variable residing in .bss.
> Those guards can be used by kcov to deduplicate the coverage on the fly.
>
> As a first step, we make the new instrumentation mode 1:1 compatible
> with the old one.
>
> [1] https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-pcs-with-guards
>
> Cc: x86@kernel.org
> Signed-off-by: Alexander Potapenko <glider@google.com>
> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> ---
> v4:
> - add Reviewed-by: Dmitry Vyukov
>
> v3:
> - per Dmitry Vyukov's request, add better comments in
> scripts/module.lds.S and lib/Kconfig.debug
> - add -sanitizer-coverage-drop-ctors to scripts/Makefile.kcov
> to drop the unwanted constructors emitting unsupported relocations
> - merge the __sancov_guards section into .bss
>
> v2:
> - Address comments by Dmitry Vyukov
> - rename CONFIG_KCOV_ENABLE_GUARDS to CONFIG_KCOV_UNIQUE
> - update commit description and config description
> - Address comments by Marco Elver
> - rename sanitizer_cov_write_subsequent() to kcov_append_to_buffer()
> - make config depend on X86_64 (via ARCH_HAS_KCOV_UNIQUE)
> - swap #ifdef branches
> - tweak config description
> - remove redundant check for CONFIG_CC_HAS_SANCOV_TRACE_PC_GUARD
>
> Change-Id: Iacb1e71fd061a82c2acadf2347bba4863b9aec39
> ---
> arch/x86/Kconfig | 1 +
> arch/x86/kernel/vmlinux.lds.S | 1 +
> include/asm-generic/vmlinux.lds.h | 13 ++++++-
> include/linux/kcov.h | 2 +
> kernel/kcov.c | 61 +++++++++++++++++++++----------
> lib/Kconfig.debug | 26 +++++++++++++
> scripts/Makefile.kcov | 7 ++++
> scripts/module.lds.S | 35 ++++++++++++++++++
> tools/objtool/check.c | 1 +
> 9 files changed, 126 insertions(+), 21 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 8bed9030ad473..0533070d24fe7 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -94,6 +94,7 @@ config X86
> select ARCH_HAS_FORTIFY_SOURCE
> select ARCH_HAS_GCOV_PROFILE_ALL
> select ARCH_HAS_KCOV if X86_64
> + select ARCH_HAS_KCOV_UNIQUE if X86_64
> select ARCH_HAS_KERNEL_FPU_SUPPORT
> select ARCH_HAS_MEM_ENCRYPT
> select ARCH_HAS_MEMBARRIER_SYNC_CORE
> diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
> index 4fa0be732af10..52fe6539b9c91 100644
> --- a/arch/x86/kernel/vmlinux.lds.S
> +++ b/arch/x86/kernel/vmlinux.lds.S
> @@ -372,6 +372,7 @@ SECTIONS
> . = ALIGN(PAGE_SIZE);
> *(BSS_MAIN)
> BSS_DECRYPTED
> + BSS_SANCOV_GUARDS
> . = ALIGN(PAGE_SIZE);
> __bss_stop = .;
> }
> diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
> index fa5f19b8d53a0..ee78328eecade 100644
> --- a/include/asm-generic/vmlinux.lds.h
> +++ b/include/asm-generic/vmlinux.lds.h
> @@ -102,7 +102,8 @@
> * sections to be brought in with rodata.
> */
> #if defined(CONFIG_LD_DEAD_CODE_DATA_ELIMINATION) || defined(CONFIG_LTO_CLANG) || \
> -defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
> + defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG) || \
> + defined(CONFIG_KCOV_UNIQUE)
> #define TEXT_MAIN .text .text.[0-9a-zA-Z_]*
> #else
> #define TEXT_MAIN .text
> @@ -121,6 +122,16 @@ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
> #define SBSS_MAIN .sbss
> #endif
>
> +#if defined(CONFIG_KCOV_UNIQUE)
> +/* BSS_SANCOV_GUARDS must be part of the .bss section so that it is zero-initialized. */
> +#define BSS_SANCOV_GUARDS \
> + __start___sancov_guards = .; \
> + *(__sancov_guards); \
> + __stop___sancov_guards = .;
> +#else
> +#define BSS_SANCOV_GUARDS
> +#endif
> +
> /*
> * GCC 4.5 and later have a 32 bytes section alignment for structures.
> * Except GCC 4.9, that feels the need to align on 64 bytes.
> diff --git a/include/linux/kcov.h b/include/linux/kcov.h
> index 2b3655c0f2278..2acccfa5ae9af 100644
> --- a/include/linux/kcov.h
> +++ b/include/linux/kcov.h
> @@ -107,6 +107,8 @@ typedef unsigned long long kcov_u64;
> #endif
>
> void __sanitizer_cov_trace_pc(void);
> +void __sanitizer_cov_trace_pc_guard(u32 *guard);
> +void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop);
> void __sanitizer_cov_trace_cmp1(u8 arg1, u8 arg2);
> void __sanitizer_cov_trace_cmp2(u16 arg1, u16 arg2);
> void __sanitizer_cov_trace_cmp4(u32 arg1, u32 arg2);
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index 5170f367c8a1b..8154ac1c1622e 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -194,27 +194,15 @@ static notrace unsigned long canonicalize_ip(unsigned long ip)
> return ip;
> }
>
> -/*
> - * Entry point from instrumented code.
> - * This is called once per basic-block/edge.
> - */
> -void notrace __sanitizer_cov_trace_pc(void)
> +static notrace void kcov_append_to_buffer(unsigned long *area, int size,
> + unsigned long ip)
> {
> - struct task_struct *t;
> - unsigned long *area;
> - unsigned long ip = canonicalize_ip(_RET_IP_);
> - unsigned long pos;
> -
> - t = current;
> - if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
> - return;
> -
> - area = t->kcov_state.area;
> /* The first 64-bit word is the number of subsequent PCs. */
> - pos = READ_ONCE(area[0]) + 1;
> - if (likely(pos < t->kcov_state.size)) {
> - /* Previously we write pc before updating pos. However, some
> - * early interrupt code could bypass check_kcov_mode() check
> + unsigned long pos = READ_ONCE(area[0]) + 1;
> +
> + if (likely(pos < size)) {
> + /*
> + * Some early interrupt code could bypass check_kcov_mode() check
> * and invoke __sanitizer_cov_trace_pc(). If such interrupt is
> * raised between writing pc and updating pos, the pc could be
> * overitten by the recursive __sanitizer_cov_trace_pc().
> @@ -225,7 +213,40 @@ void notrace __sanitizer_cov_trace_pc(void)
> area[pos] = ip;
> }
> }
> +
> +/*
> + * Entry point from instrumented code.
> + * This is called once per basic-block/edge.
> + */
> +#ifdef CONFIG_KCOV_UNIQUE
> +void notrace __sanitizer_cov_trace_pc_guard(u32 *guard)
> +{
> + if (!check_kcov_mode(KCOV_MODE_TRACE_PC, current))
> + return;
> +
> + kcov_append_to_buffer(current->kcov_state.area,
> + current->kcov_state.size,
> + canonicalize_ip(_RET_IP_));
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_pc_guard);
> +
> +void notrace __sanitizer_cov_trace_pc_guard_init(uint32_t *start,
> + uint32_t *stop)
> +{
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_pc_guard_init);
> +#else /* !CONFIG_KCOV_UNIQUE */
> +void notrace __sanitizer_cov_trace_pc(void)
> +{
> + if (!check_kcov_mode(KCOV_MODE_TRACE_PC, current))
> + return;
> +
> + kcov_append_to_buffer(current->kcov_state.area,
> + current->kcov_state.size,
> + canonicalize_ip(_RET_IP_));
> +}
> EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
> +#endif
>
> #ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
> @@ -253,7 +274,7 @@ static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
> start_index = 1 + count * KCOV_WORDS_PER_CMP;
> end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
> if (likely(end_pos <= max_pos)) {
> - /* See comment in __sanitizer_cov_trace_pc(). */
> + /* See comment in kcov_append_to_buffer(). */
> WRITE_ONCE(area[0], count + 1);
> barrier();
> area[start_index] = type;
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index ebe33181b6e6e..a7441f89465f3 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -2153,6 +2153,12 @@ config ARCH_HAS_KCOV
> build and run with CONFIG_KCOV. This typically requires
> disabling instrumentation for some early boot code.
>
> +config CC_HAS_SANCOV_TRACE_PC
> + def_bool $(cc-option,-fsanitize-coverage=trace-pc)
> +
> +config CC_HAS_SANCOV_TRACE_PC_GUARD
> + def_bool $(cc-option,-fsanitize-coverage=trace-pc-guard)
> +
> config KCOV
> bool "Code coverage for fuzzing"
> depends on ARCH_HAS_KCOV
> @@ -2166,6 +2172,26 @@ config KCOV
>
> For more details, see Documentation/dev-tools/kcov.rst.
>
> +config ARCH_HAS_KCOV_UNIQUE
> + bool
> + help
> + An architecture should select this when it can successfully
> + build and run with CONFIG_KCOV_UNIQUE.
> +
> +config KCOV_UNIQUE
> + depends on KCOV
> + depends on CC_HAS_SANCOV_TRACE_PC_GUARD && ARCH_HAS_KCOV_UNIQUE
> + bool "Enable unique program counter collection mode for KCOV"
> + help
> + This option enables KCOV's unique program counter (PC) collection mode,
> + which deduplicates PCs on the fly when the KCOV_UNIQUE_ENABLE ioctl is
> + used.
> +
> + This significantly reduces the memory footprint for coverage data
> + collection compared to trace mode, as it prevents the kernel from
> + storing the same PC multiple times.
> + Enabling this mode incurs a slight increase in kernel binary size.
> +
> config KCOV_ENABLE_COMPARISONS
> bool "Enable comparison operands collection by KCOV"
> depends on KCOV
> diff --git a/scripts/Makefile.kcov b/scripts/Makefile.kcov
> index 78305a84ba9d2..c3ad5504f5600 100644
> --- a/scripts/Makefile.kcov
> +++ b/scripts/Makefile.kcov
> @@ -1,5 +1,12 @@
> # SPDX-License-Identifier: GPL-2.0-only
> +ifeq ($(CONFIG_KCOV_UNIQUE),y)
> +kcov-flags-y += -fsanitize-coverage=trace-pc-guard
> +# Drop per-file constructors that -fsanitize-coverage=trace-pc-guard inserts by default.
> +# Kernel does not need them, and they may produce unknown relocations.
> +kcov-flags-y += -mllvm -sanitizer-coverage-drop-ctors
> +else
> kcov-flags-y += -fsanitize-coverage=trace-pc
> +endif
> kcov-flags-$(CONFIG_KCOV_ENABLE_COMPARISONS) += -fsanitize-coverage=trace-cmp
>
> kcov-rflags-y += -Cpasses=sancov-module
> diff --git a/scripts/module.lds.S b/scripts/module.lds.S
> index 450f1088d5fd3..17f36d5112c5d 100644
> --- a/scripts/module.lds.S
> +++ b/scripts/module.lds.S
> @@ -47,6 +47,7 @@ SECTIONS {
> .bss : {
> *(.bss .bss.[0-9a-zA-Z_]*)
> *(.bss..L*)
> + *(__sancov_guards)
This line looks like redundant?
I can boot without it both normal build and kasan build.
> }
>
> .data : {
> @@ -64,6 +65,40 @@ SECTIONS {
> MOD_CODETAG_SECTIONS()
> }
> #endif
> +
> +#ifdef CONFIG_KCOV_UNIQUE
> + /*
> + * CONFIG_KCOV_UNIQUE creates COMDAT groups for instrumented functions,
> + * which has the following consequences in the presence of
> + * -ffunction-sections:
> + * - Separate .init.text and .exit.text sections in the modules are not
> + * merged together, which results in errors trying to create
> + * duplicate entries in /sys/module/MODNAME/sections/ at module load
> + * time.
> + * - Each function is placed in a separate .text.funcname section, so
> + * there is no .text section anymore. Collecting them together here
> + * has mostly aesthetic purpose, although some tools may be expecting
> + * it to be present.
> + */
> + .text : {
> + *(.text .text.[0-9a-zA-Z_]*)
> + *(.text..L*)
> + }
> + .init.text : {
> + *(.init.text .init.text.[0-9a-zA-Z_]*)
> + *(.init.text..L*)
> + }
> + .exit.text : {
> + *(.exit.text .exit.text.[0-9a-zA-Z_]*)
> + *(.exit.text..L*)
> + }
> + .bss : {
> + *(.bss .bss.[0-9a-zA-Z_]*)
> + *(.bss..L*)
> + *(__sancov_guards)
Need to include __start___sancov_guards and __stop___sancov_guards to treat them as local,
otherwise it won't boot on aarch64, error like:
Modules: module proxy_consumer: overflow in relocation type 311 val 0.
So, finally it should look like:
__start___sancov_guards = .;
*(__sancov_guards)
__stop___sancov_guards = .;
> + }
> +#endif
> +
> MOD_SEPARATE_CODETAG_SECTIONS()
> }
>
> diff --git a/tools/objtool/check.c b/tools/objtool/check.c
> index 67d76f3a1dce5..60eb5faa27d28 100644
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -1156,6 +1156,7 @@ static const char *uaccess_safe_builtin[] = {
> "write_comp_data",
> "check_kcov_mode",
> "__sanitizer_cov_trace_pc",
> + "__sanitizer_cov_trace_pc_guard",
> "__sanitizer_cov_trace_const_cmp1",
> "__sanitizer_cov_trace_const_cmp2",
> "__sanitizer_cov_trace_const_cmp4",
> --
> 2.50.1.552.g942d659e1b-goog
>
end of thread

Thread overview: 13+ messages
2025-07-31 11:51 [PATCH v4 00/10] Coverage deduplication for KCOV Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 01/10] x86: kcov: disable instrumentation of arch/x86/kernel/tsc.c Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 02/10] kcov: elaborate on using the shared buffer Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 03/10] kcov: factor out struct kcov_state Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 04/10] mm/kasan: define __asan_before_dynamic_init, __asan_after_dynamic_init Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 05/10] kcov: x86: introduce CONFIG_KCOV_UNIQUE Alexander Potapenko
2025-08-26 8:14 ` Joey Jiao
2025-07-31 11:51 ` [PATCH v4 06/10] kcov: add trace and trace_size to struct kcov_state Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 07/10] kcov: add ioctl(KCOV_UNIQUE_ENABLE) Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 08/10] kcov: add ioctl(KCOV_RESET_TRACE) Alexander Potapenko
2025-07-31 11:51 ` [PATCH v4 09/10] kcov: selftests: add kcov_test Alexander Potapenko
2025-08-01 4:10 ` Dmitry Vyukov
2025-07-31 11:51 ` [PATCH v4 10/10] kcov: use enum kcov_mode in kcov_mode_enabled() Alexander Potapenko