* [PATCH 0/2] Improve iteration through sptes from rmap
@ 2012-03-21 14:48 Takuya Yoshikawa
2012-03-21 14:49 ` [PATCH 1/2] KVM: MMU: Make pte_list_desc fit cache lines well Takuya Yoshikawa
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Takuya Yoshikawa @ 2012-03-21 14:48 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm
By removing sptep from rmap_iterator, I could achieve 15% performance
improvement without inlining.
Takuya Yoshikawa (2):
KVM: MMU: Make pte_list_desc fit cache lines well
KVM: MMU: Improve iteration through sptes from rmap
--
1.7.5.4
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH 1/2] KVM: MMU: Make pte_list_desc fit cache lines well
2012-03-21 14:48 [PATCH 0/2] Improve iteration through sptes from rmap Takuya Yoshikawa
@ 2012-03-21 14:49 ` Takuya Yoshikawa
2012-04-08 13:09 ` Avi Kivity
2012-03-21 14:50 ` [PATCH 2/2] KVM: MMU: Improve iteration through sptes from rmap Takuya Yoshikawa
` (2 subsequent siblings)
3 siblings, 1 reply; 7+ messages in thread
From: Takuya Yoshikawa @ 2012-03-21 14:49 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm
From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
We have PTE_LIST_EXT + 1 pointers in this structure and these 40/20
bytes do not fit cache lines well. Furthermore, some allocators may
use 64/32-byte objects for the pte_list_desc cache.
This patch solves this problem by changing PTE_LIST_EXT from 4 to 3.
For shadow paging, the new size is still large enough to hold both the
kernel and process mappings for usual anonymous pages. For file
mappings, there may be a slight change in the cache usage.
Note: with EPT/NPT we almost always have a single spte in each reverse
mapping and we will not see any change by this.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
arch/x86/kvm/mmu.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index dc5f245..3213348 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -135,8 +135,6 @@ module_param(dbg, bool, 0644);
#define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | PT_USER_MASK \
| PT64_NX_MASK)
-#define PTE_LIST_EXT 4
-
#define ACC_EXEC_MASK 1
#define ACC_WRITE_MASK PT_WRITABLE_MASK
#define ACC_USER_MASK PT_USER_MASK
@@ -151,6 +149,9 @@ module_param(dbg, bool, 0644);
#define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
+/* make pte_list_desc fit well in cache line */
+#define PTE_LIST_EXT 3
+
struct pte_list_desc {
u64 *sptes[PTE_LIST_EXT];
struct pte_list_desc *more;
--
1.7.5.4
* [PATCH 2/2] KVM: MMU: Improve iteration through sptes from rmap
2012-03-21 14:48 [PATCH 0/2] Improve iteration through sptes from rmap Takuya Yoshikawa
2012-03-21 14:49 ` [PATCH 1/2] KVM: MMU: Make pte_list_desc fit cache lines well Takuya Yoshikawa
@ 2012-03-21 14:50 ` Takuya Yoshikawa
2012-04-04 12:34 ` [PATCH 0/2] " Takuya Yoshikawa
2012-04-08 13:08 ` Avi Kivity
3 siblings, 0 replies; 7+ messages in thread
From: Takuya Yoshikawa @ 2012-03-21 14:50 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm
From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Iteration using rmap_next(), whose actual body is pte_list_next(), is
inefficient: every call restarts by checking whether the rmap holds a
single spte or points to a descriptor which links more sptes.
In the case of shadow paging, this quadratic total iteration cost is a
problem. Even for two-dimensional paging, with EPT/NPT on, in which we
almost always have a single mapping, the extra checks at the end of the
iteration should be eliminated.
This patch fixes this by introducing rmap_iterator, which keeps the
iteration context for the next search. Furthermore, the implementation
of rmap_next() is split into two functions, rmap_get_first() and
rmap_get_next(), to avoid repeatedly checking whether the rmap being
iterated on has only one spte.
Although only a slight change was expected for EPT/NPT, the actual
improvement was significant: we observed that GET_DIRTY_LOG for 1GB of
dirty memory became 15% faster than before. This is probably because
the new code makes branch prediction easier.
Note: we just remove pte_list_next() because we can think of parent_ptes
as a reverse mapping.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
arch/x86/kvm/mmu.c | 196 ++++++++++++++++++++++++++++------------------
arch/x86/kvm/mmu_audit.c | 10 +-
2 files changed, 124 insertions(+), 82 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3213348..29ad6f9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -842,32 +842,6 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
return count;
}
-static u64 *pte_list_next(unsigned long *pte_list, u64 *spte)
-{
- struct pte_list_desc *desc;
- u64 *prev_spte;
- int i;
-
- if (!*pte_list)
- return NULL;
- else if (!(*pte_list & 1)) {
- if (!spte)
- return (u64 *)*pte_list;
- return NULL;
- }
- desc = (struct pte_list_desc *)(*pte_list & ~1ul);
- prev_spte = NULL;
- while (desc) {
- for (i = 0; i < PTE_LIST_EXT && desc->sptes[i]; ++i) {
- if (prev_spte == spte)
- return desc->sptes[i];
- prev_spte = desc->sptes[i];
- }
- desc = desc->more;
- }
- return NULL;
-}
-
static void
pte_list_desc_remove_entry(unsigned long *pte_list, struct pte_list_desc *desc,
int i, struct pte_list_desc *prev_desc)
@@ -988,11 +962,6 @@ static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
return pte_list_add(vcpu, spte, rmapp);
}
-static u64 *rmap_next(unsigned long *rmapp, u64 *spte)
-{
- return pte_list_next(rmapp, spte);
-}
-
static void rmap_remove(struct kvm *kvm, u64 *spte)
{
struct kvm_mmu_page *sp;
@@ -1005,6 +974,67 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
pte_list_remove(spte, rmapp);
}
+/*
+ * Used by the following functions to iterate through the sptes linked by a
+ * rmap. All fields are private and not assumed to be used outside.
+ */
+struct rmap_iterator {
+ /* private fields */
+ struct pte_list_desc *desc; /* holds the sptep if not NULL */
+ int pos; /* index of the sptep */
+};
+
+/*
+ * Iteration must be started by this function. This should also be used after
+ * removing/dropping sptes from the rmap link because in such cases the
+ * information in the iterator may not be valid.
+ *
+ * Returns sptep if found, NULL otherwise.
+ */
+static u64 *rmap_get_first(unsigned long rmap, struct rmap_iterator *iter)
+{
+ if (!rmap)
+ return NULL;
+
+ if (!(rmap & 1)) {
+ iter->desc = NULL;
+ return (u64 *)rmap;
+ }
+
+ iter->desc = (struct pte_list_desc *)(rmap & ~1ul);
+ iter->pos = 0;
+ return iter->desc->sptes[iter->pos];
+}
+
+/*
+ * Must be used with a valid iterator: e.g. after rmap_get_first().
+ *
+ * Returns sptep if found, NULL otherwise.
+ */
+static u64 *rmap_get_next(struct rmap_iterator *iter)
+{
+ if (iter->desc) {
+ if (iter->pos < PTE_LIST_EXT - 1) {
+ u64 *sptep;
+
+ ++iter->pos;
+ sptep = iter->desc->sptes[iter->pos];
+ if (sptep)
+ return sptep;
+ }
+
+ iter->desc = iter->desc->more;
+
+ if (iter->desc) {
+ iter->pos = 0;
+ /* desc->sptes[0] cannot be NULL */
+ return iter->desc->sptes[iter->pos];
+ }
+ }
+
+ return NULL;
+}
+
static void drop_spte(struct kvm *kvm, u64 *sptep)
{
if (mmu_spte_clear_track_bits(sptep))
@@ -1013,23 +1043,27 @@ static void drop_spte(struct kvm *kvm, u64 *sptep)
static int __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp, int level)
{
- u64 *spte = NULL;
+ u64 *sptep;
+ struct rmap_iterator iter;
int write_protected = 0;
- while ((spte = rmap_next(rmapp, spte))) {
- BUG_ON(!(*spte & PT_PRESENT_MASK));
- rmap_printk("rmap_write_protect: spte %p %llx\n", spte, *spte);
+ for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
+ BUG_ON(!(*sptep & PT_PRESENT_MASK));
+ rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
- if (!is_writable_pte(*spte))
+ if (!is_writable_pte(*sptep)) {
+ sptep = rmap_get_next(&iter);
continue;
+ }
if (level == PT_PAGE_TABLE_LEVEL) {
- mmu_spte_update(spte, *spte & ~PT_WRITABLE_MASK);
+ mmu_spte_update(sptep, *sptep & ~PT_WRITABLE_MASK);
+ sptep = rmap_get_next(&iter);
} else {
- BUG_ON(!is_large_pte(*spte));
- drop_spte(kvm, spte);
+ BUG_ON(!is_large_pte(*sptep));
+ drop_spte(kvm, sptep);
--kvm->stat.lpages;
- spte = NULL;
+ sptep = rmap_get_first(*rmapp, &iter);
}
write_protected = 1;
@@ -1084,48 +1118,57 @@ static int rmap_write_protect(struct kvm *kvm, u64 gfn)
static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
unsigned long data)
{
- u64 *spte;
+ u64 *sptep;
+ struct rmap_iterator iter;
int need_tlb_flush = 0;
- while ((spte = rmap_next(rmapp, NULL))) {
- BUG_ON(!(*spte & PT_PRESENT_MASK));
- rmap_printk("kvm_rmap_unmap_hva: spte %p %llx\n", spte, *spte);
- drop_spte(kvm, spte);
+ while ((sptep = rmap_get_first(*rmapp, &iter))) {
+ BUG_ON(!(*sptep & PT_PRESENT_MASK));
+ rmap_printk("kvm_rmap_unmap_hva: spte %p %llx\n", sptep, *sptep);
+
+ drop_spte(kvm, sptep);
need_tlb_flush = 1;
}
+
return need_tlb_flush;
}
static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
unsigned long data)
{
+ u64 *sptep;
+ struct rmap_iterator iter;
int need_flush = 0;
- u64 *spte, new_spte;
+ u64 new_spte;
pte_t *ptep = (pte_t *)data;
pfn_t new_pfn;
WARN_ON(pte_huge(*ptep));
new_pfn = pte_pfn(*ptep);
- spte = rmap_next(rmapp, NULL);
- while (spte) {
- BUG_ON(!is_shadow_present_pte(*spte));
- rmap_printk("kvm_set_pte_rmapp: spte %p %llx\n", spte, *spte);
+
+ for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
+ BUG_ON(!is_shadow_present_pte(*sptep));
+ rmap_printk("kvm_set_pte_rmapp: spte %p %llx\n", sptep, *sptep);
+
need_flush = 1;
+
if (pte_write(*ptep)) {
- drop_spte(kvm, spte);
- spte = rmap_next(rmapp, NULL);
+ drop_spte(kvm, sptep);
+ sptep = rmap_get_first(*rmapp, &iter);
} else {
- new_spte = *spte &~ (PT64_BASE_ADDR_MASK);
+ new_spte = *sptep & ~PT64_BASE_ADDR_MASK;
new_spte |= (u64)new_pfn << PAGE_SHIFT;
new_spte &= ~PT_WRITABLE_MASK;
new_spte &= ~SPTE_HOST_WRITEABLE;
new_spte &= ~shadow_accessed_mask;
- mmu_spte_clear_track_bits(spte);
- mmu_spte_set(spte, new_spte);
- spte = rmap_next(rmapp, spte);
+
+ mmu_spte_clear_track_bits(sptep);
+ mmu_spte_set(sptep, new_spte);
+ sptep = rmap_get_next(&iter);
}
}
+
if (need_flush)
kvm_flush_remote_tlbs(kvm);
@@ -1184,7 +1227,8 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
unsigned long data)
{
- u64 *spte;
+ u64 *sptep;
+ struct rmap_iterator iter;
int young = 0;
/*
@@ -1197,25 +1241,24 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
if (!shadow_accessed_mask)
return kvm_unmap_rmapp(kvm, rmapp, data);
- spte = rmap_next(rmapp, NULL);
- while (spte) {
- int _young;
- u64 _spte = *spte;
- BUG_ON(!(_spte & PT_PRESENT_MASK));
- _young = _spte & PT_ACCESSED_MASK;
- if (_young) {
+ for (sptep = rmap_get_first(*rmapp, &iter); sptep;
+ sptep = rmap_get_next(&iter)) {
+ BUG_ON(!(*sptep & PT_PRESENT_MASK));
+
+ if (*sptep & PT_ACCESSED_MASK) {
young = 1;
- clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
+ clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)sptep);
}
- spte = rmap_next(rmapp, spte);
}
+
return young;
}
static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
unsigned long data)
{
- u64 *spte;
+ u64 *sptep;
+ struct rmap_iterator iter;
int young = 0;
/*
@@ -1226,16 +1269,14 @@ static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
if (!shadow_accessed_mask)
goto out;
- spte = rmap_next(rmapp, NULL);
- while (spte) {
- u64 _spte = *spte;
- BUG_ON(!(_spte & PT_PRESENT_MASK));
- young = _spte & PT_ACCESSED_MASK;
- if (young) {
+ for (sptep = rmap_get_first(*rmapp, &iter); sptep;
+ sptep = rmap_get_next(&iter)) {
+ BUG_ON(!(*sptep & PT_PRESENT_MASK));
+
+ if (*sptep & PT_ACCESSED_MASK) {
young = 1;
break;
}
- spte = rmap_next(rmapp, spte);
}
out:
return young;
@@ -1887,10 +1928,11 @@ static void kvm_mmu_put_page(struct kvm_mmu_page *sp, u64 *parent_pte)
static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
{
- u64 *parent_pte;
+ u64 *sptep;
+ struct rmap_iterator iter;
- while ((parent_pte = pte_list_next(&sp->parent_ptes, NULL)))
- drop_parent_pte(sp, parent_pte);
+ while ((sptep = rmap_get_first(sp->parent_ptes, &iter)))
+ drop_parent_pte(sp, sptep);
}
static int mmu_zap_unsync_children(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu_audit.c b/arch/x86/kvm/mmu_audit.c
index 6eabae3..785e7d2 100644
--- a/arch/x86/kvm/mmu_audit.c
+++ b/arch/x86/kvm/mmu_audit.c
@@ -192,7 +192,8 @@ static void audit_write_protection(struct kvm *kvm, struct kvm_mmu_page *sp)
{
struct kvm_memory_slot *slot;
unsigned long *rmapp;
- u64 *spte;
+ u64 *sptep;
+ struct rmap_iterator iter;
if (sp->role.direct || sp->unsync || sp->role.invalid)
return;
@@ -200,13 +201,12 @@ static void audit_write_protection(struct kvm *kvm, struct kvm_mmu_page *sp)
slot = gfn_to_memslot(kvm, sp->gfn);
rmapp = &slot->rmap[sp->gfn - slot->base_gfn];
- spte = rmap_next(rmapp, NULL);
- while (spte) {
- if (is_writable_pte(*spte))
+ for (sptep = rmap_get_first(*rmapp, &iter); sptep;
+ sptep = rmap_get_next(&iter)) {
+ if (is_writable_pte(*sptep))
audit_printk(kvm, "shadow page has writable "
"mappings: gfn %llx role %x\n",
sp->gfn, sp->role.word);
- spte = rmap_next(rmapp, spte);
}
}
--
1.7.5.4
* Re: [PATCH 0/2] Improve iteration through sptes from rmap
2012-03-21 14:48 [PATCH 0/2] Improve iteration through sptes from rmap Takuya Yoshikawa
2012-03-21 14:49 ` [PATCH 1/2] KVM: MMU: Make pte_list_desc fit cache lines well Takuya Yoshikawa
2012-03-21 14:50 ` [PATCH 2/2] KVM: MMU: Improve iteration through sptes from rmap Takuya Yoshikawa
@ 2012-04-04 12:34 ` Takuya Yoshikawa
2012-04-08 13:08 ` Avi Kivity
3 siblings, 0 replies; 7+ messages in thread
From: Takuya Yoshikawa @ 2012-04-04 12:34 UTC (permalink / raw)
To: Takuya Yoshikawa; +Cc: avi, mtosatti, kvm
On Wed, 21 Mar 2012 23:48:23 +0900
Takuya Yoshikawa <takuya.yoshikawa@gmail.com> wrote:
> By removing sptep from rmap_iterator, I could achieve 15% performance
> improvement without inlining.
>
> Takuya Yoshikawa (2):
> KVM: MMU: Make pte_list_desc fit cache lines well
> KVM: MMU: Improve iteration through sptes from rmap
>
ping
Takuya
* Re: [PATCH 0/2] Improve iteration through sptes from rmap
2012-03-21 14:48 [PATCH 0/2] Improve iteration through sptes from rmap Takuya Yoshikawa
` (2 preceding siblings ...)
2012-04-04 12:34 ` [PATCH 0/2] " Takuya Yoshikawa
@ 2012-04-08 13:08 ` Avi Kivity
3 siblings, 0 replies; 7+ messages in thread
From: Avi Kivity @ 2012-04-08 13:08 UTC (permalink / raw)
To: Takuya Yoshikawa; +Cc: mtosatti, kvm
On 03/21/2012 04:48 PM, Takuya Yoshikawa wrote:
> By removing sptep from rmap_iterator, I could achieve 15% performance
> improvement without inlining.
>
>
Thanks, applied to next-candidate.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH 1/2] KVM: MMU: Make pte_list_desc fit cache lines well
2012-03-21 14:49 ` [PATCH 1/2] KVM: MMU: Make pte_list_desc fit cache lines well Takuya Yoshikawa
@ 2012-04-08 13:09 ` Avi Kivity
2012-04-08 14:50 ` Takuya Yoshikawa
0 siblings, 1 reply; 7+ messages in thread
From: Avi Kivity @ 2012-04-08 13:09 UTC (permalink / raw)
To: Takuya Yoshikawa; +Cc: mtosatti, kvm
On 03/21/2012 04:49 PM, Takuya Yoshikawa wrote:
> From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
>
> We have PTE_LIST_EXT + 1 pointers in this structure and these 40/20
> bytes do not fit cache lines well. Furthermore, some allocators may
> use 64/32-byte objects for the pte_list_desc cache.
>
> This patch solves this problem by changing PTE_LIST_EXT from 4 to 3.
>
> For shadow paging, the new size is still large enough to hold both the
> kernel and process mappings for usual anonymous pages. For file
> mappings, there may be a slight change in the cache usage.
>
> Note: with EPT/NPT we almost always have a single spte in each reverse
> mapping and we will not see any change by this.
>
> @@ -135,8 +135,6 @@ module_param(dbg, bool, 0644);
> #define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | PT_USER_MASK \
> | PT64_NX_MASK)
>
> -#define PTE_LIST_EXT 4
> -
> #define ACC_EXEC_MASK 1
> #define ACC_WRITE_MASK PT_WRITABLE_MASK
> #define ACC_USER_MASK PT_USER_MASK
> @@ -151,6 +149,9 @@ module_param(dbg, bool, 0644);
>
> #define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
>
> +/* make pte_list_desc fit well in cache line */
> +#define PTE_LIST_EXT 3
> +
> struct pte_list_desc {
> u64 *sptes[PTE_LIST_EXT];
> struct pte_list_desc *more;
We could go even further and have 4 pointers, and use bit 0 to decide
whether it's a next pointer or an sptep.
Not sure it's worth the extra complexity.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH 1/2] KVM: MMU: Make pte_list_desc fit cache lines well
2012-04-08 13:09 ` Avi Kivity
@ 2012-04-08 14:50 ` Takuya Yoshikawa
0 siblings, 0 replies; 7+ messages in thread
From: Takuya Yoshikawa @ 2012-04-08 14:50 UTC (permalink / raw)
To: Avi Kivity; +Cc: mtosatti, kvm
On Sun, 08 Apr 2012 16:09:58 +0300
Avi Kivity <avi@redhat.com> wrote:
> > +/* make pte_list_desc fit well in cache line */
> > +#define PTE_LIST_EXT 3
> > +
> > struct pte_list_desc {
> > u64 *sptes[PTE_LIST_EXT];
> > struct pte_list_desc *more;
>
> We could go even further and have 4 pointers, and use bit 0 to decide
> whether it's a next pointer or an sptep.
>
> Not sure it's worth the extra complexity.
>
It may not be so complex if we reuse the rmap encoding/decoding, but it
would be a hack for shadow paging only ... and the mmu is already
complex enough; so I am not sure - I need to think about it again later.
My primary goal is to make the code saner and faster.
Takuya