* Re: [PATCH v11 00/24] kasan: add software tag-based mode for arm64
       [not found] <cover.1542648335.git.andreyknvl@google.com>
@ 2018-11-19 17:28 ` Andrey Konovalov
  2018-11-19 17:32   ` Mark Rutland
       [not found] ` <0288334225edc99d98d70c896494e19c3bd9361a.1542648335.git.andreyknvl@google.com>
       [not found] ` <356c34c9a2ae8348a6cbd1de53135a28187fa120.1542648335.git.andreyknvl@google.com>
  2 siblings, 1 reply; 6+ messages in thread
From: Andrey Konovalov @ 2018-11-19 17:28 UTC (permalink / raw)
  To: Catalin Marinas, Mark Rutland
  Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov,
	Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Will Deacon,
	Christoph Lameter, Andrew Morton, Nick Desaulniers, Marc Zyngier,
	Da

On Mon, Nov 19, 2018 at 6:26 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
> This patchset adds a new software tag-based mode to KASAN [1].
> (Initially this mode was called KHWASAN, but it got renamed,
>  see the naming rationale at the end of this section).
>
> The plan is to implement HWASan [2] for the kernel, with the expectation
> that it will have performance comparable to KASAN while consuming much
> less memory, trading that off for somewhat less precise bug detection and
> arm64-only support.
>
> The underlying ideas of the approach used by software tag-based KASAN are:
>
> 1. By using the Top Byte Ignore (TBI) arm64 CPU feature, we can store
>    pointer tags in the top byte of each kernel pointer.
>
> 2. Using shadow memory, we can store memory tags for each chunk of kernel
>    memory.
>
> 3. On each memory allocation, we can generate a random tag, embed it into
>    the returned pointer and set the memory tags that correspond to this
>    chunk of memory to the same value.
>
> 4. By using compiler instrumentation, before each memory access we can add
>    a check that the pointer tag matches the tag of the memory that is being
>    accessed.
>
> 5. On a tag mismatch we report an error.
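>
> As a rough standalone illustration of the check from points 4 and 5 (plain
> userspace C, not the actual kernel implementation; the names and the toy
> shadow array are made up, and a 64-bit machine is assumed):
>
>     #include <stdint.h>
>     #include <stdio.h>
>
>     #define TAG_SHIFT    56          /* tag lives in the top byte (TBI)  */
>     #define SHADOW_SCALE 16          /* one shadow byte per 16 bytes     */
>
>     static uint8_t shadow[1 << 20];  /* toy shadow region                */
>
>     static void check_access(uintptr_t tagged_ptr, size_t size)
>     {
>         uint8_t ptr_tag = tagged_ptr >> TAG_SHIFT;
>         uintptr_t addr = tagged_ptr & ((1UL << TAG_SHIFT) - 1);
>
>         /* Compare the pointer tag with the tag stored for this memory. */
>         if (ptr_tag != shadow[addr / SHADOW_SCALE])
>             printf("tag mismatch at %#lx, size %zu\n",
>                    (unsigned long)addr, size);
>     }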
>
> With this patchset the existing KASAN mode gets renamed to generic KASAN,
> with the word "generic" meaning that the implementation can be supported
> by any architecture as it is purely software.
>
> The new mode this patchset adds is called software tag-based KASAN. The
> word "tag-based" refers to the fact that this mode uses tags embedded into
> the top byte of kernel pointers and the arm64 TBI CPU feature, which allows
> such pointers to be dereferenced. The word "software" here means that shadow
> memory manipulation and tag checking on pointer dereference are done in
> software. As it is the only tag-based implementation right now, "software
> tag-based" KASAN is sometimes referred to as simply "tag-based" in this
> patchset.
>
> A potential expansion of this mode is a hardware tag-based mode, which would
> use hardware memory tagging support (announced by Arm [3]) instead of
> compiler instrumentation and manual shadow memory manipulation.
>
> Same as generic KASAN, software tag-based KASAN is strictly a debugging
> feature.
>
> [1] https://www.kernel.org/doc/html/latest/dev-tools/kasan.html
>
> [2] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
>
> [3] https://community.arm.com/processors/b/blog/posts/arm-a-profile-architecture-2018-developments-armv85a
>
>
> ====== Rationale
>
> On mobile devices, generic KASAN's memory usage is a significant problem. One
> of the main reasons to have tag-based KASAN is to be able to perform a
> similar set of checks as the generic one does, but with lower memory
> requirements.
>
> Comment from Vishwath Mohan <vishwath@google.com>:
>
> I don't have data on-hand, but anecdotally both ASAN and KASAN have proven
> problematic to enable for environments that don't tolerate the increased
> memory pressure well. This includes:
> (a) Low-memory form factors - Wear, TV, Things, lower-tier phones like Go,
> (b) Connected components like Pixel's visual core [1].
>
> These are both places I'd love to have a low(er) memory footprint option at
> my disposal.
>
> Comment from Evgenii Stepanov <eugenis@google.com>:
>
> Looking at a live Android device under load, slab (according to
> /proc/meminfo) + kernel stack take 8-10% of available RAM (~350 MB). KASAN's
> overhead of 2x - 3x on top of it is not insignificant.
>
> Not having this overhead enables near-production use - e.g. running a
> KASAN/KHWASAN kernel on a personal, daily-use device to catch bugs that do
> not reproduce in test configuration. These are the ones that often cost
> the most engineering time to track down.
>
> CPU overhead is bad, but generally tolerable. RAM is critical, in our
> experience. Once it gets low enough, OOM-killer makes your life miserable.
>
> [1] https://www.blog.google/products/pixel/pixel-visual-core-image-processing-and-machine-learning-pixel-2/
>
>
> ====== Technical details
>
> Software tag-based KASAN mode is implemented in a very similar way to the
> generic one. This patchset essentially does the following:
>
> 1. TCR_TBI1 is set to enable Top Byte Ignore.
>
> 2. Shadow memory is used (with a different scale, 1:16, so each shadow
>    byte corresponds to 16 bytes of kernel memory) to store memory tags.
>
> 3. All slab objects are aligned to the shadow scale, which is 16 bytes.
>
> 4. All pointers returned from the slab allocator are tagged with a random
>    tag and the corresponding shadow memory is poisoned with the same value.
>
> 5. Compiler instrumentation is used to insert tag checks, either by
>    calling callbacks or by inlining them (the CONFIG_KASAN_OUTLINE and
>    CONFIG_KASAN_INLINE flags are reused).
>
> 6. When a tag mismatch is detected in callback instrumentation mode,
>    KASAN simply prints a bug report. In case of inline instrumentation,
>    clang inserts a brk instruction, and KASAN has its own brk handler,
>    which reports the bug.
>
> 7. The memory in between slab objects is marked with a reserved tag, and
>    acts as a redzone.
>
> 8. When a slab object is freed it's marked with a reserved tag.
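>
> To make points 4, 7, and 8 above concrete, here is a minimal standalone
> sketch of what tagging on allocation and free amounts to (again plain
> userspace C with invented names, not the actual hooks; 0xFE stands in for
> the reserved tag, matching the RFC v2 note further down):
>
>     #include <stdint.h>
>     #include <stdlib.h>
>     #include <string.h>
>
>     #define TAG_SHIFT    56
>     #define SHADOW_SCALE 16
>     #define TAG_RESERVED 0xFE        /* marks redzones and freed objects */
>
>     static uint8_t shadow[1 << 20];  /* toy shadow region                */
>
>     static void set_tags(uintptr_t addr, size_t size, uint8_t tag)
>     {
>         memset(&shadow[addr / SHADOW_SCALE], tag, size / SHADOW_SCALE);
>     }
>
>     /* size is assumed to be a multiple of SHADOW_SCALE (see point 3). */
>     static void *tag_alloc(void *ptr, size_t size)
>     {
>         uint8_t tag = rand() % TAG_RESERVED;  /* skip the reserved values */
>
>         set_tags((uintptr_t)ptr, size, tag);
>         return (void *)((uintptr_t)ptr | ((uintptr_t)tag << TAG_SHIFT));
>     }
>
>     static void tag_free(void *ptr, size_t size)
>     {
>         set_tags((uintptr_t)ptr & ((1UL << TAG_SHIFT) - 1), size, TAG_RESERVED);
>     }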
>
> Bug detection is imprecise for two reasons:
>
> 1. We won't catch some small out-of-bounds accesses that fall into the
>    same shadow cell as the last byte of a slab object.
>
> 2. We only have 1 byte to store tags, which means we have a 1/256
>    probability of a tag match for an incorrect access (actually even
>    slightly less due to reserved tag values).
>
> Despite that, there's a particular class of bugs that tag-based KASAN can
> detect but generic KASAN cannot: a use-after-free that happens after the
> object has been allocated by someone else.
>
>
> ====== Testing
>
> Some kernel developers voiced a concern that changing the top byte of
> kernel pointers may lead to subtle bugs that are difficult to discover.
> To address this concern, deliberate testing has been performed.
>
> It doesn't seem feasible to do some kind of static checking to find
> potential issues with pointer tagging, so a dynamic approach was taken.
> All pointer comparisons/subtractions were instrumented with an LLVM
> compiler pass, and a kernel module was used that prints a bug report
> whenever two pointers with different tags are compared/subtracted
> (ignoring comparisons with NULL pointers and with pointers obtained by
> casting an error code to a pointer type). The kernel was then booted in
> QEMU and on an Odroid C2 board, and syzkaller was run.
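>
> For illustration, the module's runtime check was conceptually of the
> following form (a hypothetical sketch, not the actual pass/module code,
> which is not part of this patchset):
>
>     #include <linux/err.h>
>     #include <linux/printk.h>
>     #include <linux/types.h>
>
>     /* Called by the instrumentation on every pointer comparison/subtraction. */
>     static void check_ptr_pair(const void *a, const void *b)
>     {
>         u8 tag_a = (unsigned long)a >> 56;
>         u8 tag_b = (unsigned long)b >> 56;
>
>         /* Ignore NULL pointers and error codes cast to pointers. */
>         if (!a || !b || IS_ERR(a) || IS_ERR(b))
>             return;
>         if (tag_a != tag_b)
>             pr_warn("pointers with different tags: %px vs %px\n", a, b);
>     }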
>
> This yielded the following results.
>
> The two places that look interesting are:
>
> is_vmalloc_addr in include/linux/mm.h
> is_kernel_rodata in mm/util.c
>
> Here we compare a pointer with some fixed untagged values to make sure
> that the pointer lies in a particular part of the kernel address space.
> Since tag-based KASAN doesn't add tags to pointers that belong to rodata
> or vmalloc regions, this should work as is. To make sure, debug checks have
> been added to those two functions to verify that the result doesn't change
> whether we operate on a tagged or an untagged pointer.
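>
> (For illustration only, such a debug check might look roughly like the
> following; this is a sketch rather than the exact code that was added,
> and untagged() stands in for whatever helper strips the tag:)
>
>     static inline bool is_vmalloc_addr(const void *x)
>     {
>         unsigned long addr = (unsigned long)x;
>         bool ret = addr >= VMALLOC_START && addr < VMALLOC_END;
>
>         /* The result must not depend on the pointer tag. */
>         WARN_ON(ret != (untagged(addr) >= VMALLOC_START &&
>                         untagged(addr) < VMALLOC_END));
>         return ret;
>     }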
>
> A few other cases that don't look that interesting:
>
> Comparing pointers to achieve a unique sorting order of pointee objects
> (e.g. sorting lock addresses before performing a double lock):
>
> tty_ldisc_lock_pair_timeout in drivers/tty/tty_ldisc.c
> pipe_double_lock in fs/pipe.c
> unix_state_double_lock in net/unix/af_unix.c
> lock_two_nondirectories in fs/inode.c
> mutex_lock_double in kernel/events/core.c
>
> ep_cmp_ffd in fs/eventpoll.c
> fsnotify_compare_groups in fs/notify/mark.c
>
> Nothing needs to be done here, since the tags embedded into pointers
> don't change, so the sorting order would still be unique.
>
> Checks that a pointer belongs to some particular allocation:
>
> is_sibling_entry in lib/radix-tree.c
> object_is_on_stack in include/linux/sched/task_stack.h
>
> Nothing needs to be done here either, since two pointers can only belong
> to the same allocation if they have the same tag.
>
> Overall, since the kernel boots and works, there are no critical bugs.
> As for the rest, the traditional kernel testing approach (use it until it
> fails) is the only one that looks feasible.
>
> Another point here is that tag-based KASAN is available under a separate
> config option that needs to be deliberately enabled. Even though it might
> be used in a "near-production" environment to find bugs that are not found
> during fuzzing or running tests, it is still a debug tool.
>
>
> ====== Benchmarks
>
> The following numbers were collected on an Odroid C2 board. Both generic and
> tag-based KASAN were used in inline instrumentation mode.
>
> Boot time [1]:
> * ~1.7 sec for clean kernel
> * ~5.0 sec for generic KASAN
> * ~5.0 sec for tag-based KASAN
>
> Network performance [2]:
> * 8.33 Gbits/sec for clean kernel
> * 3.17 Gbits/sec for generic KASAN
> * 2.85 Gbits/sec for tag-based KASAN
>
> Slab memory usage after boot [3]:
> * ~40 kb for clean kernel
> * ~105 kb (~260% overhead) for generic KASAN
> * ~47 kb (~20% overhead) for tag-based KASAN
>
> KASAN memory overhead consists of three main parts:
> 1. Increased slab memory usage due to redzones.
> 2. Shadow memory (the whole of it is reserved once during boot).
> 3. Quarantine (grows gradually up to some preset limit; the larger the
>    limit, the higher the chance of detecting a use-after-free).
>
> Comparing tag-based vs generic KASAN for each of these points:
> 1. 20% vs 260% overhead.
> 2. 1/16th vs 1/8th of physical memory.
> 3. Tag-based KASAN doesn't require quarantine.
>
> [1] Time before the ext4 driver is initialized.
> [2] Measured as `iperf -s & iperf -c 127.0.0.1 -t 30`.
> [3] Measured as `cat /proc/meminfo | grep Slab`.
>
>
> ====== Some notes
>
> A few notes:
>
> 1. The patchset can be found here:
>    https://github.com/xairy/kasan-prototype/tree/khwasan
>
> 2. Building requires a recent Clang version (7.0.0 or later).
>
> 3. Stack instrumentation is not supported yet and will be added later.
>
>
> ====== Changes
>
> Changes in v11:
> - Rebased onto 9ff01193 (4.20-rc3).
> - Moved KASAN_SHADOW_SCALE_SHIFT definition to arch/arm64/Makefile.
> - Added and used CC_HAS_KASAN_GENERIC and CC_HAS_KASAN_SW_TAGS configs to
>   detect compiler support.
> - New patch: "kasan: rename kasan_zero_page to kasan_early_shadow_page".
> - New patch: "arm64: move untagged_addr macro from uaccess.h to memory.h".
> - Renamed KASAN_SET_TAG/... macros in arch/arm64/include/asm/memory.h to
>   __tag_set/... and reused them later in KASAN core code instead of
>   redefining.
> - Removed tag reset from the __kimg_to_phys() macro.
> - Fixed tagged pointer handling in arm64 fault handling logic.

Hi Mark and Catalin,

I've addressed your comments, please take a look.

Thanks!

>
> Changes in v10:
> - Rebased onto 65102238 (4.20-rc1).
> - Don't ignore the kasan_kmalloc() return value in kmem_cache_alloc_trace()
>   and kmem_cache_alloc_node_trace() in include/linux/slab.h.
> - New patch: don't ignore kasan_kmalloc return value in
>   early_kmem_cache_node_alloc.
> - New patch: added __must_check annotations to KASAN hooks that assign
>   tags.
> - Changed KASAN clang version requirement to 7.0.0 (as we need rL329612).
> - Moved __no_sanitize_address definition from compiler_attributes.h to
>   compiler-gcc.h and compiler-clang.h.
>
> Changes in v9:
> - Fixed kasan_init_slab_obj() hook when KASAN is disabled.
> - Added assign_tag() function that preassigns tags for caches with
>   constructors.
> - Fixed KASAN_TAG_MASK redefinition in include/linux/mm.h vs
>   mm/kasan/kasan.h.
>
> Changes in v8:
> - Rebased onto 7876320f (4.19-rc4).
> - Renamed KHWASAN to software tag-based KASAN (see the top of the cover
>   letter for details).
> - Explicitly called tag-based KASAN a debug tool.
> - Reused the kasan_init_slab_obj() callback to preassign tags to caches
>   without constructors, removed khwasan_preset_sl(u/a)b_tag().
> - Moved obj_to_index to include/linux/slab_def.h from mm/slab.c.
> - Moved cache->s_mem untagging to alloc_slabmgmt() for SLAB.
> - Fixed check_memory_region() to correctly handle user memory accesses and
>   size == 0 case.
> - Merged __no_sanitize_hwaddress into __no_sanitize_address.
> - Defined KASAN_SET_TAG and KASAN_RESET_TAG macros for non KASAN builds to
>   avoid duplication of __kimg_to_phys, _virt_addr_is_linear and
>   page_to_virt macros.
> - Fixed and simplified find_first_bad_addr for generic KASAN.
> - Use non symbolized example KASAN report in documentation.
> - Mention clang version requirements for both KASAN modes in the Kconfig
>   options and in the documentation.
> - Various small fixes.
>
> Version v7 got accidentally skipped.
>
> Changes in v6:
> - Rebased onto 050cdc6c (4.19-rc1+).
> - Added notes regarding patchset testing into the cover letter.
>
> Changes in v5:
> - Rebased onto 1ffaddd029 (4.18-rc8).
> - Preassign tags for objects from caches with constructors and
>   SLAB_TYPESAFE_BY_RCU caches.
> - Fix SLAB allocator support by untagging page->s_mem in
>   kasan_poison_slab().
> - Performed dynamic testing to find potential places where pointer tagging
>   might result in bugs [1].
> - Clarified and fixed memory usage benchmarks in the cover letter.
> - Added a rationale for having KHWASAN to the cover letter.
>
> Changes in v4:
> - Fixed SPDX comment style in mm/kasan/kasan.h.
> - Fixed mm/kasan/kasan.h changes being included in a wrong patch.
> - Swapped "khwasan, arm64: fix up fault handling logic" and "khwasan: add
>   tag related helper functions" patches order.
> - Rebased onto 6f0d349d (4.18-rc2+).
>
> Changes in v3:
> - Minor documentation fixes.
> - Fixed CFLAGS variable name in KASAN makefile.
> - Added a "SPDX-License-Identifier: GPL-2.0" line to all source files
>   under mm/kasan.
> - Rebased onto 81e97f013 (4.18-rc1+).
>
> Changes in v2:
> - Changed kmalloc_large_node_hook to return tagged pointer instead of
>   using an output argument.
> - Fix checking whether -fsanitize=hwaddress is supported by the compiler.
> - Removed duplication of -fno-builtin for KASAN and KHWASAN.
> - Removed {} block for one line for_each_possible_cpu loop.
> - Made set_track() static inline as it is used only in common.c.
> - Moved optimal_redzone() to common.c.
> - Fixed using tagged pointer for shadow calculation in
>   kasan_unpoison_shadow().
> - Restored setting cache->align in kasan_cache_create(), which was
>   accidentally lost.
> - Simplified __kasan_slab_free(), kasan_alloc_pages() and kasan_kmalloc().
> - Removed tagging from kasan_kmalloc_large().
> - Added page_kasan_tag_reset() to kasan_poison_slab() and removed
>   !PageSlab() check from page_to_virt.
> - Reset pointer tag in _virt_addr_is_linear.
> - Set page tag for each page when multiple pages are allocated or freed.
> - Added a comment as to why we ignore cma allocated pages.
>
> Changes in v1:
> - Rebased onto 4.17-rc4.
> - Updated benchmarking stats.
> - Documented compiler version requirements, memory usage and slowdown.
> - Dropped kvm patches, as clang + arm64 + kvm is completely broken [1].
>
> Changes in RFC v3:
> - Renamed CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS to
>   CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW respectively.
> - Switch to -fsanitize=kernel-hwaddress instead of -fsanitize=hwaddress.
> - Removed unnecessary excessive shadow initialization.
> - Removed khwasan_enabled flag (it's not needed since KHWASAN is
>   initialized before any slab caches are used).
> - Split out kasan_report.c and khwasan_report.c from report.c.
> - Moved more common KASAN and KHWASAN functions to common.c.
> - Added tagging to pagealloc.
> - Rebased onto 4.17-rc1.
> - Temporarily dropped patch that adds kvm support (arm64 + kvm + clang
>   combo is broken right now [2]).
>
> Changes in RFC v2:
> - Removed explicit casts to u8 * for kasan_mem_to_shadow() calls.
> - Introduced KASAN_TCR_FLAGS for setting the TCR_TBI1 flag.
> - Added a comment regarding the non-atomic RMW sequence in
>   khwasan_random_tag().
> - Made all tag related functions accept const void *.
> - Untagged pointers in __kimg_to_phys, which is used by virt_to_phys.
> - Untagged pointers in show_ptr in fault handling logic.
> - Untagged pointers passed to KVM.
> - Added two reserved tag values: 0xFF and 0xFE.
> - Used the reserved tag 0xFF to disable validity checking (to resolve the
>   issue with pointer tag being lost after page_address + kmap usage).
> - Used the reserved tag 0xFE to mark redzones and freed objects.
> - Added mnemonics for esr manipulation in KHWASAN brk handler.
> - Added a comment about the -recover flag.
> - Some minor cleanups and fixes.
> - Rebased onto 3215b9d5 (4.16-rc6+).
> - Tested on real hardware (Odroid C2 board).
> - Added better benchmarks.
>
> [1] https://lkml.org/lkml/2018/7/18/765
> [2] https://lkml.org/lkml/2018/4/19/775
>
> Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>
> Andrey Konovalov (24):
>   kasan, mm: change hooks signatures
>   kasan, slub: handle pointer tags in early_kmem_cache_node_alloc
>   kasan: move common generic and tag-based code to common.c
>   kasan: rename source files to reflect the new naming scheme
>   kasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS
>   kasan, arm64: adjust shadow size for tag-based mode
>   kasan: rename kasan_zero_page to kasan_early_shadow_page
>   kasan: initialize shadow to 0xff for tag-based mode
>   arm64: move untagged_addr macro from uaccess.h to memory.h
>   kasan: add tag related helper functions
>   kasan, arm64: untag address in _virt_addr_is_linear
>   kasan: preassign tags to objects with ctors or SLAB_TYPESAFE_BY_RCU
>   kasan, arm64: fix up fault handling logic
>   kasan, arm64: enable top byte ignore for the kernel
>   kasan, mm: perform untagged pointers comparison in krealloc
>   kasan: split out generic_report.c from report.c
>   kasan: add bug reporting routines for tag-based mode
>   mm: move obj_to_index to include/linux/slab_def.h
>   kasan: add hooks implementation for tag-based mode
>   kasan, arm64: add brk handler for inline instrumentation
>   kasan, mm, arm64: tag non slab memory allocated via pagealloc
>   kasan: add __must_check annotations to kasan hooks
>   kasan: update documentation
>   kasan: add SPDX-License-Identifier mark to source files
>
>  Documentation/dev-tools/kasan.rst      | 232 +++++----
>  arch/arm64/Kconfig                     |   1 +
>  arch/arm64/Makefile                    |  11 +-
>  arch/arm64/include/asm/brk-imm.h       |   2 +
>  arch/arm64/include/asm/kasan.h         |   8 +-
>  arch/arm64/include/asm/memory.h        |  42 +-
>  arch/arm64/include/asm/pgtable-hwdef.h |   1 +
>  arch/arm64/include/asm/uaccess.h       |   7 -
>  arch/arm64/kernel/traps.c              |  68 ++-
>  arch/arm64/mm/fault.c                  |  31 +-
>  arch/arm64/mm/kasan_init.c             |  56 ++-
>  arch/arm64/mm/proc.S                   |   8 +-
>  arch/s390/mm/dump_pagetables.c         |  16 +-
>  arch/s390/mm/kasan_init.c              |  33 +-
>  arch/x86/mm/dump_pagetables.c          |  11 +-
>  arch/x86/mm/kasan_init_64.c            |  55 ++-
>  arch/xtensa/mm/kasan_init.c            |  18 +-
>  include/linux/compiler-clang.h         |   5 +-
>  include/linux/compiler-gcc.h           |   6 +
>  include/linux/compiler_attributes.h    |  13 -
>  include/linux/kasan.h                  | 101 +++-
>  include/linux/mm.h                     |  29 ++
>  include/linux/page-flags-layout.h      |  10 +
>  include/linux/slab.h                   |   4 +-
>  include/linux/slab_def.h               |  13 +
>  lib/Kconfig.kasan                      |  96 +++-
>  mm/cma.c                               |  11 +
>  mm/kasan/Makefile                      |  15 +-
>  mm/kasan/{kasan.c => common.c}         | 655 +++++++++----------------
>  mm/kasan/generic.c                     | 344 +++++++++++++
>  mm/kasan/generic_report.c              | 153 ++++++
>  mm/kasan/{kasan_init.c => init.c}      |  71 +--
>  mm/kasan/kasan.h                       |  59 ++-
>  mm/kasan/quarantine.c                  |   1 +
>  mm/kasan/report.c                      | 272 +++-------
>  mm/kasan/tags.c                        | 161 ++++++
>  mm/kasan/tags_report.c                 |  58 +++
>  mm/page_alloc.c                        |   1 +
>  mm/slab.c                              |  29 +-
>  mm/slab.h                              |   2 +-
>  mm/slab_common.c                       |   6 +-
>  mm/slub.c                              |  51 +-
>  scripts/Makefile.kasan                 |  53 +-
>  43 files changed, 1822 insertions(+), 997 deletions(-)
>  rename mm/kasan/{kasan.c => common.c} (59%)
>  create mode 100644 mm/kasan/generic.c
>  create mode 100644 mm/kasan/generic_report.c
>  rename mm/kasan/{kasan_init.c => init.c} (82%)
>  create mode 100644 mm/kasan/tags.c
>  create mode 100644 mm/kasan/tags_report.c
>
> --
> 2.19.1.1215.g8438c0b245-goog
>


* Re: [PATCH v11 00/24] kasan: add software tag-based mode for arm64
  2018-11-19 17:28 ` [PATCH v11 00/24] kasan: add software tag-based mode for arm64 Andrey Konovalov
@ 2018-11-19 17:32   ` Mark Rutland
  0 siblings, 0 replies; 6+ messages in thread
From: Mark Rutland @ 2018-11-19 17:32 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Catalin Marinas, Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, Will Deacon,
	Christoph Lameter, Andrew Morton, Nick Desaulniers, Marc Zyngier

On Mon, Nov 19, 2018 at 06:28:57PM +0100, Andrey Konovalov wrote:
> On Mon, Nov 19, 2018 at 6:26 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
> > Changes in v11:
> > - Rebased onto 9ff01193 (4.20-rc3).
> > - Moved KASAN_SHADOW_SCALE_SHIFT definition to arch/arm64/Makefile.
> > - Added and used CC_HAS_KASAN_GENERIC and CC_HAS_KASAN_SW_TAGS configs to
> >   detect compiler support.
> > - New patch: "kasan: rename kasan_zero_page to kasan_early_shadow_page".
> > - New patch: "arm64: move untagged_addr macro from uaccess.h to memory.h".
> > - Renamed KASAN_SET_TAG/... macros in arch/arm64/include/asm/memory.h to
> >   __tag_set/... and reused them later in KASAN core code instead of
> >   redefining.
> > - Removed tag reset from the __kimg_to_phys() macro.
> > - Fixed tagged pointer handling in arm64 fault handling logic.
> 
> Hi Mark and Catalin,

Hi Andrey,

> I've addressed your comments, please take a look.

Catalin and I have just returned from Linux Plumbers and are catching up
with things. I do intend to look at this, but it may take a short while
before I can.

Thanks,
Mark.


* Re: [PATCH v11 09/24] arm64: move untagged_addr macro from uaccess.h to memory.h
       [not found] ` <0288334225edc99d98d70c896494e19c3bd9361a.1542648335.git.andreyknvl@google.com>
@ 2018-11-23 17:37   ` Mark Rutland
  2018-11-27 16:04     ` Andrey Konovalov
  0 siblings, 1 reply; 6+ messages in thread
From: Mark Rutland @ 2018-11-23 17:37 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart <kste>

On Mon, Nov 19, 2018 at 06:26:25PM +0100, Andrey Konovalov wrote:
> Move the untagged_addr() macro from arch/arm64/include/asm/uaccess.h
> to arch/arm64/include/asm/memory.h to be later reused by KASAN.
> 
> Also make the untagged_addr() macro accept all kinds of address types
> (void *, unsigned long, etc.). This removes the need for type casts in
> each place where the macro is used. This is done by using __typeof__.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  arch/arm64/include/asm/memory.h  | 8 ++++++++
>  arch/arm64/include/asm/uaccess.h | 7 -------
>  2 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 05fbc7ffcd31..deb95be44392 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -73,6 +73,14 @@
>  #define KERNEL_START      _text
>  #define KERNEL_END        _end
>  
> +/*
> + * When dealing with data aborts, watchpoints, or instruction traps we may end
> + * up with a tagged userland pointer. Clear the tag to get a sane pointer to
> + * pass on to access_ok(), for instance.
> + */
> +#define untagged_addr(addr)	\
> +	(__typeof__(addr))sign_extend64((__u64)(addr), 55)

Minor nits:

* s/__u64/u64/ (or s/__u64/unsigned long/), since this isn't a UAPI
  header.

* Please move this down into the #ifndef __ASSEMBLY__ block, after we
  include <linux/bitops.h>, which is necessary for sign_extend64().
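
For reference, with both nits applied the definition might end up looking
something like this (just a sketch of the suggestion, not a requirement on
the exact form):

	/* In the #ifndef __ASSEMBLY__ block, after <linux/bitops.h>: */
	#define untagged_addr(addr)	\
		(__typeof__(addr))sign_extend64((u64)(addr), 55)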

With those fixed up, this patch looks sound to me:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> +
>  /*
>   * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
>   * address space for the shadow region respectively. They can bloat the stack
> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index 07c34087bd5e..281a1e47263d 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -96,13 +96,6 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si
>  	return ret;
>  }
>  
> -/*
> - * When dealing with data aborts, watchpoints, or instruction traps we may end
> - * up with a tagged userland pointer. Clear the tag to get a sane pointer to
> - * pass on to access_ok(), for instance.
> - */
> -#define untagged_addr(addr)		sign_extend64(addr, 55)
> -
>  #define access_ok(type, addr, size)	__range_ok(addr, size)
>  #define user_addr_max			get_fs
>  
> -- 
> 2.19.1.1215.g8438c0b245-goog
> 


* Re: [PATCH v11 05/24] kasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS
       [not found] ` <356c34c9a2ae8348a6cbd1de53135a28187fa120.1542648335.git.andreyknvl@google.com>
@ 2018-11-23 17:43   ` Mark Rutland
  2018-11-27 16:12     ` Andrey Konovalov
  0 siblings, 1 reply; 6+ messages in thread
From: Mark Rutland @ 2018-11-23 17:43 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart <kste>

On Mon, Nov 19, 2018 at 06:26:21PM +0100, Andrey Konovalov wrote:
> This commit splits the current CONFIG_KASAN config option into two:
> 1. CONFIG_KASAN_GENERIC, that enables the generic KASAN mode (the one
>    that exists now);
> 2. CONFIG_KASAN_SW_TAGS, that enables the software tag-based KASAN mode.
> 
> The name CONFIG_KASAN_SW_TAGS is chosen as in the future we will have
> another hardware tag-based KASAN mode, that will rely on hardware memory
> tagging support in arm64.
> 
> With CONFIG_KASAN_SW_TAGS enabled, compiler options are changed to
> instrument kernel files with -fsanitize=kernel-hwaddress (except the ones
> for which KASAN_SANITIZE := n is set).
> 
> Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS support both
> CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.
> 
> This commit also adds an empty placeholder (for now) implementation of
> tag-based KASAN specific hooks inserted by the compiler and adjusts
> common hooks implementation to compile correctly with each of the
> config options.
> 
> Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  arch/arm64/Kconfig                  |  1 +
>  include/linux/compiler-clang.h      |  5 +-
>  include/linux/compiler-gcc.h        |  6 ++
>  include/linux/compiler_attributes.h | 13 ----
>  include/linux/kasan.h               | 16 +++--
>  lib/Kconfig.kasan                   | 96 +++++++++++++++++++++++------
>  mm/kasan/Makefile                   |  6 +-
>  mm/kasan/generic.c                  |  2 +-
>  mm/kasan/kasan.h                    |  3 +-
>  mm/kasan/tags.c                     | 75 ++++++++++++++++++++++
>  mm/slub.c                           |  2 +-
>  scripts/Makefile.kasan              | 53 +++++++++-------
>  12 files changed, 216 insertions(+), 62 deletions(-)
>  create mode 100644 mm/kasan/tags.c
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 787d7850e064..8b331dcfb48e 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -111,6 +111,7 @@ config ARM64
>  	select HAVE_ARCH_JUMP_LABEL
>  	select HAVE_ARCH_JUMP_LABEL_RELATIVE
>  	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
> +	select HAVE_ARCH_KASAN_SW_TAGS if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)

> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -1,35 +1,95 @@
> +# This config refers to the generic KASAN mode.
>  config HAVE_ARCH_KASAN
>  	bool
>  
> +config HAVE_ARCH_KASAN_SW_TAGS
> +	bool
> +
> +config CC_HAS_KASAN_GENERIC
> +	def_bool $(cc-option, -fsanitize=kernel-address)
> +
> +config CC_HAS_KASAN_SW_TAGS
> +	def_bool $(cc-option, -fsanitize=kernel-hwaddress)

> +if HAVE_ARCH_KASAN_SW_TAGS
> +
> +config KASAN_SW_TAGS
> +	bool "Software tag-based mode"
> +	depends on CC_HAS_KASAN_SW_TAGS
> +	depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB)
> +	select SLUB_DEBUG if SLUB
> +	select CONSTRUCTORS
> +	select STACKDEPOT
> +	help
> +	  Enables software tag-based KASAN mode.
> +	  This mode requires Top Byte Ignore support by the CPU and therefore
> +	  is only supported for arm64.
> +	  This mode requires Clang version 7.0.0 or later.
> +	  This mode consumes about 1/16th of available memory at kernel start
> +	  and introduces an overhead of ~20% for the rest of the allocations.
> +	  This mode may potentially introduce problems relating to pointer
> +	  casting and comparison, as it embeds tags into the top byte of each
> +	  pointer.
> +	  For better error detection enable CONFIG_STACKTRACE.
> +	  Currently CONFIG_KASAN_SW_TAGS doesn't work with CONFIG_DEBUG_SLAB
> +	  (the resulting kernel does not boot).
> +
> +endif

IIUC as of this patch a user can select KASAN_SW_TAGS...

> +ifdef CONFIG_KASAN_SW_TAGS
> +
> +ifdef CONFIG_KASAN_INLINE
> +    instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
> +else
> +    instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1
> +endif
> +
> +CFLAGS_KASAN := -fsanitize=kernel-hwaddress \
> +		-mllvm -hwasan-instrument-stack=0 \
> +		$(instrumentation_flags)
> +
> +endif # CONFIG_KASAN_SW_TAGS

... and therefore we start using the compiler option, even though we
haven't introduced all of the necessary infrastructure yet.

That doesn't sound right to me. At the very least, that breaks
randconfig builds.

What we can do, in-order, is:

1) introduce the core refactoring, dependent on HAVE_ARCH_KASAN_SW_TAGS
2) introduce the new infrastructure and arch code
3) select HAVE_ARCH_KASAN_SW_TAGS

... such that at (3), all KASAN configurations are known to work.

Thanks,
Mark.


* Re: [PATCH v11 09/24] arm64: move untagged_addr macro from uaccess.h to memory.h
  2018-11-23 17:37   ` [PATCH v11 09/24] arm64: move untagged_addr macro from uaccess.h to memory.h Mark Rutland
@ 2018-11-27 16:04     ` Andrey Konovalov
  0 siblings, 0 replies; 6+ messages in thread
From: Andrey Konovalov @ 2018-11-27 16:04 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart <kste>

On Fri, Nov 23, 2018 at 6:37 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Mon, Nov 19, 2018 at 06:26:25PM +0100, Andrey Konovalov wrote:
>> Move the untagged_addr() macro from arch/arm64/include/asm/uaccess.h
>> to arch/arm64/include/asm/memory.h to be later reused by KASAN.
>>
>> Also make the untagged_addr() macro accept all kinds of address types
>> (void *, unsigned long, etc.). This removes the need for type casts in
>> each place where the macro is used. This is done by using __typeof__.
>>
>> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>> ---
>>  arch/arm64/include/asm/memory.h  | 8 ++++++++
>>  arch/arm64/include/asm/uaccess.h | 7 -------
>>  2 files changed, 8 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 05fbc7ffcd31..deb95be44392 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -73,6 +73,14 @@
>>  #define KERNEL_START      _text
>>  #define KERNEL_END        _end
>>
>> +/*
>> + * When dealing with data aborts, watchpoints, or instruction traps we may end
>> + * up with a tagged userland pointer. Clear the tag to get a sane pointer to
>> + * pass on to access_ok(), for instance.
>> + */
>> +#define untagged_addr(addr)  \
>> +     (__typeof__(addr))sign_extend64((__u64)(addr), 55)
>
> Minor nits:
>
> * s/__u64/u64/ (or s/__u64/unsigned long/), since this isn't a UAPI
>   header.
>
> * Please move this down into the #ifndef __ASSEMBLY__ block, after we
>   include <linux/bitops.h>, which is necessary for sign_extend64().
>
> With those fixed up, this patch looks sound to me:
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> Thanks,
> Mark.

Will do in v12, thanks!

>
>> +
>>  /*
>>   * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
>>   * address space for the shadow region respectively. They can bloat the stack
>> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
>> index 07c34087bd5e..281a1e47263d 100644
>> --- a/arch/arm64/include/asm/uaccess.h
>> +++ b/arch/arm64/include/asm/uaccess.h
>> @@ -96,13 +96,6 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si
>>       return ret;
>>  }
>>
>> -/*
>> - * When dealing with data aborts, watchpoints, or instruction traps we may end
>> - * up with a tagged userland pointer. Clear the tag to get a sane pointer to
>> - * pass on to access_ok(), for instance.
>> - */
>> -#define untagged_addr(addr)          sign_extend64(addr, 55)
>> -
>>  #define access_ok(type, addr, size)  __range_ok(addr, size)
>>  #define user_addr_max                        get_fs
>>
>> --
>> 2.19.1.1215.g8438c0b245-goog
>>


* Re: [PATCH v11 05/24] kasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS
  2018-11-23 17:43   ` [PATCH v11 05/24] kasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS Mark Rutland
@ 2018-11-27 16:12     ` Andrey Konovalov
  0 siblings, 0 replies; 6+ messages in thread
From: Andrey Konovalov @ 2018-11-27 16:12 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart <kste>

On Fri, Nov 23, 2018 at 6:43 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Mon, Nov 19, 2018 at 06:26:21PM +0100, Andrey Konovalov wrote:
>> This commit splits the current CONFIG_KASAN config option into two:
>> 1. CONFIG_KASAN_GENERIC, that enables the generic KASAN mode (the one
>>    that exists now);
>> 2. CONFIG_KASAN_SW_TAGS, that enables the software tag-based KASAN mode.
>>
>> The name CONFIG_KASAN_SW_TAGS is chosen as in the future we will have
>> another hardware tag-based KASAN mode, that will rely on hardware memory
>> tagging support in arm64.
>>
>> With CONFIG_KASAN_SW_TAGS enabled, compiler options are changed to
>> instrument kernel files with -fsanitize=kernel-hwaddress (except the ones
>> for which KASAN_SANITIZE := n is set).
>>
>> Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS support both
>> CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.
>>
>> This commit also adds an empty placeholder (for now) implementation of
>> tag-based KASAN specific hooks inserted by the compiler and adjusts
>> common hooks implementation to compile correctly with each of the
>> config options.
>>
>> Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
>> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
>> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>> ---
>>  arch/arm64/Kconfig                  |  1 +
>>  include/linux/compiler-clang.h      |  5 +-
>>  include/linux/compiler-gcc.h        |  6 ++
>>  include/linux/compiler_attributes.h | 13 ----
>>  include/linux/kasan.h               | 16 +++--
>>  lib/Kconfig.kasan                   | 96 +++++++++++++++++++++++------
>>  mm/kasan/Makefile                   |  6 +-
>>  mm/kasan/generic.c                  |  2 +-
>>  mm/kasan/kasan.h                    |  3 +-
>>  mm/kasan/tags.c                     | 75 ++++++++++++++++++++++
>>  mm/slub.c                           |  2 +-
>>  scripts/Makefile.kasan              | 53 +++++++++-------
>>  12 files changed, 216 insertions(+), 62 deletions(-)
>>  create mode 100644 mm/kasan/tags.c
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 787d7850e064..8b331dcfb48e 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -111,6 +111,7 @@ config ARM64
>>       select HAVE_ARCH_JUMP_LABEL
>>       select HAVE_ARCH_JUMP_LABEL_RELATIVE
>>       select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
>> +     select HAVE_ARCH_KASAN_SW_TAGS if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
>
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -1,35 +1,95 @@
>> +# This config refers to the generic KASAN mode.
>>  config HAVE_ARCH_KASAN
>>       bool
>>
>> +config HAVE_ARCH_KASAN_SW_TAGS
>> +     bool
>> +
>> +config CC_HAS_KASAN_GENERIC
>> +     def_bool $(cc-option, -fsanitize=kernel-address)
>> +
>> +config CC_HAS_KASAN_SW_TAGS
>> +     def_bool $(cc-option, -fsanitize=kernel-hwaddress)
>
>> +if HAVE_ARCH_KASAN_SW_TAGS
>> +
>> +config KASAN_SW_TAGS
>> +     bool "Software tag-based mode"
>> +     depends on CC_HAS_KASAN_SW_TAGS
>> +     depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB)
>> +     select SLUB_DEBUG if SLUB
>> +     select CONSTRUCTORS
>> +     select STACKDEPOT
>> +     help
>> +       Enables software tag-based KASAN mode.
>> +       This mode requires Top Byte Ignore support by the CPU and therefore
>> +       is only supported for arm64.
>> +       This mode requires Clang version 7.0.0 or later.
>> +       This mode consumes about 1/16th of available memory at kernel start
>> +       and introduces an overhead of ~20% for the rest of the allocations.
>> +       This mode may potentially introduce problems relating to pointer
>> +       casting and comparison, as it embeds tags into the top byte of each
>> +       pointer.
>> +       For better error detection enable CONFIG_STACKTRACE.
>> +       Currently CONFIG_KASAN_SW_TAGS doesn't work with CONFIG_DEBUG_SLAB
>> +       (the resulting kernel does not boot).
>> +
>> +endif
>
> IIUC as of this patch a user can select KASAN_SW_TAGS...
>
>> +ifdef CONFIG_KASAN_SW_TAGS
>> +
>> +ifdef CONFIG_KASAN_INLINE
>> +    instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
>> +else
>> +    instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1
>> +endif
>> +
>> +CFLAGS_KASAN := -fsanitize=kernel-hwaddress \
>> +             -mllvm -hwasan-instrument-stack=0 \
>> +             $(instrumentation_flags)
>> +
>> +endif # CONFIG_KASAN_SW_TAGS
>
> ... and therefore we start using the compiler option, even though we
> haven't introduced all of the necessary infrastructure yet.
>
> That doesn't sound right to me. At the very least, that breaks
> randconfig builds.
>
> What we can do, in-order, is:
>
> 1) introduce the core refactoring, dependent on HAVE_ARCH_KASAN_SW_TAGS
> 2) introduce the new infrastructure and arch code
> 3) select HAVE_ARCH_KASAN_SW_TAGS
>
> ... such that at (3), all KASAN configurations are known to work.
>
> Thanks,
> Mark.

Will do in v12, thanks!

