From: Carlos Llamas <cmllamas@google.com>
To: Sami Tolvanen <samitolvanen@google.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Josh Poimboeuf <jpoimboe@kernel.org>,
Jason Baron <jbaron@akamai.com>,
Alice Ryhl <aliceryhl@google.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ard Biesheuvel <ardb@kernel.org>, Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Namhyung Kim <namhyung@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@kernel.org>, Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>,
James Clark <james.clark@linaro.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
Kees Cook <kees@kernel.org>, Linus Walleij <linusw@kernel.org>,
"Borislav Petkov (AMD)" <bp@alien8.de>,
Nathan Chancellor <nathan@kernel.org>,
Thomas Gleixner <tglx@kernel.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>,
Jens Remus <jremus@linux.ibm.com>,
Juergen Gross <jgross@suse.com>,
Carlos Llamas <cmllamas@google.com>,
Conor Dooley <conor.dooley@microchip.com>,
David Kaplan <david.kaplan@amd.com>,
Lukas Bulwahn <lukas.bulwahn@redhat.com>,
Jinjie Ruan <ruanjinjie@huawei.com>,
James Morse <james.morse@arm.com>,
Thomas Huth <thuth@redhat.com>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>
Cc: kernel-team@android.com, linux-kernel@vger.kernel.org,
"Will McVicker" <willmcvicker@google.com>,
"Thomas Weißschuh" <thomas.weissschuh@linutronix.de>,
"moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)"
<linux-arm-kernel@lists.infradead.org>,
"open list:PERFORMANCE EVENTS SUBSYSTEM"
<linux-perf-users@vger.kernel.org>
Subject: [PATCH] static_call: use CFI-compliant return0 stubs
Date: Wed, 11 Mar 2026 22:57:40 +0000
Message-ID: <20260311225822.1565895-1-cmllamas@google.com>
In-Reply-To: <20260309223156.GA73501@google.com>
Architectures with !HAVE_STATIC_CALL (such as arm64) rely on the generic
static_call implementation via indirect calls. In particular, users of
DEFINE_STATIC_CALL_RET0 default to the generic __static_call_return0
stub to optimize the unset path.

However, __static_call_return0 has a fixed signature of "long (*)(void)"
which may not match the expected prototype at callsites. This triggers
CFI failures when CONFIG_CFI is enabled. A trivial perf command is
enough to trigger one:

  $ perf record -a sleep 1
CFI failure at perf_prepare_sample+0x98/0x7f8 (target: __static_call_return0+0x0/0x10; expected type: 0x837de525)
Internal error: Oops - CFI: 00000000f2008228 [#1] SMP
Modules linked in:
CPU: 0 UID: 0 PID: 638 Comm: perf Not tainted 7.0.0-rc3 #25 PREEMPT
Hardware name: linux,dummy-virt (DT)
pstate: 900000c5 (NzcV daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : perf_prepare_sample+0x98/0x7f8
lr : perf_prepare_sample+0x70/0x7f8
sp : ffff80008289bc20
x29: ffff80008289bc30 x28: 000000000000001f x27: 0000000000000018
x26: 0000000000000100 x25: ffffffffffffffff x24: 0000000000000000
x23: 0000000000010187 x22: ffff8000851eba40 x21: 0000000000010087
x20: ffff0000098c9ea0 x19: ffff80008289bdc0 x18: 0000000000000000
x17: 00000000837de525 x16: 0000000072923c8f x15: 7fffffffffffffff
x14: 00007fffffffffff x13: 00000000ffffffea x12: 0000000000000000
x11: 0000000000000015 x10: 0000000000000000 x9 : ffff8000822f2240
x8 : ffff800080276e4c x7 : 0000000000000000 x6 : 0000000000000000
x5 : 0000000000000000 x4 : ffff8000851eba10 x3 : ffff8000851eba40
x2 : ffff8000822f2240 x1 : 0000000000000000 x0 : 00000009d377c3a0
Call trace:
perf_prepare_sample+0x98/0x7f8 (P)
perf_event_output_forward+0x5c/0x17c
__perf_event_overflow+0x2fc/0x460
perf_event_overflow+0x1c/0x28
armv8pmu_handle_irq+0x134/0x210
[...]
To fix this, let architectures provide an ARCH_DEFINE_TYPED_STUB_RET0
implementation that generates individual signature-matching stubs for
users of DEFINE_STATIC_CALL_RET0. This ensures the CFI type hash of the
call target matches the one expected at the callsite.
Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Kees Cook <kees@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will McVicker <willmcvicker@google.com>
Fixes: 87b940a0675e ("perf/core: Use static_call to optimize perf_guest_info_callbacks")
Closes: https://lore.kernel.org/all/YfrQzoIWyv9lNljh@google.com/
Suggested-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
---
 arch/Kconfig                         |  4 ++++
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/linkage.h     |  3 ++-
 arch/arm64/include/asm/static_call.h | 23 +++++++++++++++++++++++
 include/linux/static_call.h          | 19 ++++++++++++++++++-
 kernel/events/core.c                 | 11 +++++++----
 kernel/sched/core.c                  |  4 ++--
 7 files changed, 57 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/include/asm/static_call.h
diff --git a/arch/Kconfig b/arch/Kconfig
index 102ddbd4298e..7735d548f02e 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1678,6 +1678,10 @@ config HAVE_STATIC_CALL_INLINE
 	depends on HAVE_STATIC_CALL
 	select OBJTOOL
 
+config HAVE_STATIC_CALL_TYPED_STUBS
+	bool
+	depends on !HAVE_STATIC_CALL
+
 config HAVE_PREEMPT_DYNAMIC
 	bool
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 38dba5f7e4d2..b370c31a23cf 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -252,6 +252,7 @@ config ARM64
 	select HAVE_RSEQ
 	select HAVE_RUST if RUSTC_SUPPORTS_ARM64
 	select HAVE_STACKPROTECTOR
+	select HAVE_STATIC_CALL_TYPED_STUBS if CFI
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index 40bd17add539..5625ea365d27 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -4,9 +4,10 @@
 #ifdef __ASSEMBLER__
 #include <asm/assembler.h>
 #endif
+#include <linux/stringify.h>
 
 #define __ALIGN		.balign CONFIG_FUNCTION_ALIGNMENT
-#define __ALIGN_STR	".balign " #CONFIG_FUNCTION_ALIGNMENT
+#define __ALIGN_STR	__stringify(__ALIGN)
 
 /*
  * When using in-kernel BTI we need to ensure that PCS-conformant
diff --git a/arch/arm64/include/asm/static_call.h b/arch/arm64/include/asm/static_call.h
new file mode 100644
index 000000000000..ef754b58b1c9
--- /dev/null
+++ b/arch/arm64/include/asm/static_call.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_STATIC_CALL_H
+#define _ASM_ARM64_STATIC_CALL_H
+
+#include <linux/compiler.h>
+#include <asm/linkage.h>
+
+/* Generates a CFI-compliant "return 0" stub matching @reffunc signature */
+#define __ARCH_DEFINE_TYPED_STUB_RET0(name, reffunc)		\
+	typeof(reffunc) name;					\
+	__ADDRESSABLE(name);					\
+	asm(							\
+	"	" __ALIGN_STR "\n"				\
+	"	.4byte __kcfi_typeid_" #name "\n"		\
+	#name ":\n"						\
+	"	bti c\n"					\
+	"	mov x0, xzr\n"					\
+	"	ret"						\
+	);
+#define ARCH_DEFINE_TYPED_STUB_RET0(name, reffunc)		\
+	__ARCH_DEFINE_TYPED_STUB_RET0(name, reffunc)
+
+#endif /* _ASM_ARM64_STATIC_CALL_H */
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 78a77a4ae0ea..6cb44441dfe0 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -184,6 +184,8 @@ extern int static_call_text_reserved(void *start, void *end);
 extern long __static_call_return0(void);
 
+#define STATIC_CALL_STUB_RET0(...)	((void *)&__static_call_return0)
+
 #define DEFINE_STATIC_CALL(name, _func)				\
 	DECLARE_STATIC_CALL(name, _func);			\
 	struct static_call_key STATIC_CALL_KEY(name) = {	\
@@ -270,6 +272,8 @@ static inline int static_call_text_reserved(void *start, void *end)
 extern long __static_call_return0(void);
 
+#define STATIC_CALL_STUB_RET0(...)	((void *)&__static_call_return0)
+
 #define EXPORT_STATIC_CALL(name)				\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
@@ -294,6 +298,18 @@ static inline long __static_call_return0(void)
 	return 0;
 }
 
+#ifdef CONFIG_HAVE_STATIC_CALL_TYPED_STUBS
+#include <asm/static_call.h>
+
+#define STATIC_CALL_STUB_RET0(name)	__static_call_##name
+#define DEFINE_STATIC_CALL_STUB_RET0(name, _func)		\
+	ARCH_DEFINE_TYPED_STUB_RET0(STATIC_CALL_STUB_RET0(name), _func)
+#else
+/* Fall back to the generic __static_call_return0 stub */
+#define STATIC_CALL_STUB_RET0(...)	((void *)&__static_call_return0)
+#define DEFINE_STATIC_CALL_STUB_RET0(...)
+#endif
+
 #define __DEFINE_STATIC_CALL(name, _func, _func_init)		\
 	DECLARE_STATIC_CALL(name, _func);			\
 	struct static_call_key STATIC_CALL_KEY(name) = {	\
@@ -307,7 +323,8 @@ static inline long __static_call_return0(void)
 	__DEFINE_STATIC_CALL(name, _func, NULL)
 
 #define DEFINE_STATIC_CALL_RET0(name, _func)			\
-	__DEFINE_STATIC_CALL(name, _func, __static_call_return0)
+	DEFINE_STATIC_CALL_STUB_RET0(name, _func)		\
+	__DEFINE_STATIC_CALL(name, _func, STATIC_CALL_STUB_RET0(name))
 
 static inline void __static_call_nop(void) { }
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1f5699b339ec..6ac00e89d320 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7695,16 +7695,19 @@ void perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
 }
 EXPORT_SYMBOL_GPL(perf_register_guest_info_callbacks);
 
+#define static_call_disable(name)	\
+	static_call_update(name, STATIC_CALL_STUB_RET0(name))
+
 void perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
 {
 	if (WARN_ON_ONCE(rcu_access_pointer(perf_guest_cbs) != cbs))
 		return;
 
 	rcu_assign_pointer(perf_guest_cbs, NULL);
-	static_call_update(__perf_guest_state, (void *)&__static_call_return0);
-	static_call_update(__perf_guest_get_ip, (void *)&__static_call_return0);
-	static_call_update(__perf_guest_handle_intel_pt_intr, (void *)&__static_call_return0);
-	static_call_update(__perf_guest_handle_mediated_pmi, (void *)&__static_call_return0);
+	static_call_disable(__perf_guest_state);
+	static_call_disable(__perf_guest_get_ip);
+	static_call_disable(__perf_guest_handle_intel_pt_intr);
+	static_call_disable(__perf_guest_handle_mediated_pmi);
 	synchronize_rcu();
 }
 EXPORT_SYMBOL_GPL(perf_unregister_guest_info_callbacks);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7f77c165a6e..57c441d01564 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7443,12 +7443,12 @@ EXPORT_SYMBOL(__cond_resched);
 #ifdef CONFIG_PREEMPT_DYNAMIC
 # ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
 # define cond_resched_dynamic_enabled		__cond_resched
-# define cond_resched_dynamic_disabled		((void *)&__static_call_return0)
+# define cond_resched_dynamic_disabled		STATIC_CALL_STUB_RET0(cond_resched)
 DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
 EXPORT_STATIC_CALL_TRAMP(cond_resched);
 
 # define might_resched_dynamic_enabled		__cond_resched
-# define might_resched_dynamic_disabled		((void *)&__static_call_return0)
+# define might_resched_dynamic_disabled		STATIC_CALL_STUB_RET0(might_resched)
 DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
 EXPORT_STATIC_CALL_TRAMP(might_resched);
 # elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
--
2.53.0.473.g4a7958ca14-goog