public inbox for linux-kernel@vger.kernel.org
From: Michael Neuling <mikey@neuling.org>
To: Andi Kleen <andi@firstfloor.org>
Cc: linux-kernel@vger.kernel.org, torvalds@linux-foundation.org,
	akpm@linux-foundation.org, x86@kernel.org,
	Andi Kleen <ak@linux.intel.com>
Subject: Re: [PATCH 02/29] x86, tsx: Add RTM intrinsics
Date: Mon, 25 Mar 2013 14:40:34 +1100	[thread overview]
Message-ID: <30471.1364182834@ale.ozlabs.ibm.com> (raw)
In-Reply-To: <1364001923-10796-3-git-send-email-andi@firstfloor.org>

> From: Andi Kleen <ak@linux.intel.com>
> 
> This adds the basic RTM (Restricted Transactional Memory)
> intrinsics for TSX, implemented with alternative() so that they can be
> transparently used without checking CPUID first.
> 
> When the CPU does not support TSX we just always jump to the abort handler.
> 
> These intrinsics are only expected to be used by some low level code
> that presents higher level interface (like locks).
> 
> This is using the same interface as gcc and icc. There's a way to implement
> the intrinsics more efficiently with newer compilers that support asm goto,
> but to avoid undue dependencies on new tool chains this is not used here.
> 
> Also the current way looks slightly nicer, at the cost of only two more
> instructions.
> 
> Also don't require a TSX aware assembler -- all new instructions are implemented
> with .byte.
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
>  arch/x86/include/asm/rtm.h |   82 ++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 82 insertions(+), 0 deletions(-)
>  create mode 100644 arch/x86/include/asm/rtm.h
> 
> diff --git a/arch/x86/include/asm/rtm.h b/arch/x86/include/asm/rtm.h
> new file mode 100644
> index 0000000..7075a04
> --- /dev/null
> +++ b/arch/x86/include/asm/rtm.h
> @@ -0,0 +1,82 @@
> +#ifndef _RTM_OFFICIAL_H
> +#define _RTM_OFFICIAL_H 1
> +
> +#include <linux/compiler.h>
> +#include <asm/alternative.h>
> +#include <asm/cpufeature.h>
> +#include <asm/nops.h>
> +
> +/*
> + * RTM -- restricted transactional memory ISA
> + *
> + * Official RTM intrinsics interface matching gcc/icc, but works
> + * on older gcc compatible compilers and binutils.
> + *
> + * _xbegin() starts a transaction. When it returns a value different
> + * from _XBEGIN_STARTED a non transactional fallback path
> + * should be executed.
> + *
> + * This is a special kernel variant that supports binary patching.
> + * When the CPU does not support RTM we always jump to the abort handler.
> + * And _xtest() always returns 0.
> +
> + * This means these intrinsics can be used without checking cpu_has_rtm
> + * first.
> + *
> + * This is the low level interface mapping directly to the instructions.
> + * Usually kernel code will use a higher level abstraction instead (like locks)
> + *
> + * Note this can be implemented more efficiently on compilers that support
> + * "asm goto". But we don't want to require this right now.
> + */
> +
> +#define _XBEGIN_STARTED		(~0u)
> +#define _XABORT_EXPLICIT	(1 << 0)
> +#define _XABORT_RETRY		(1 << 1)
> +#define _XABORT_CONFLICT	(1 << 2)
> +#define _XABORT_CAPACITY	(1 << 3)
> +#define _XABORT_DEBUG		(1 << 4)
> +#define _XABORT_NESTED		(1 << 5)
> +#define _XABORT_CODE(x)		(((x) >> 24) & 0xff)
> +
> +#define _XABORT_SOFTWARE	-5	/* not part of ISA */
> +
> +static __always_inline int _xbegin(void)
> +{
> +	int ret;
> +	alternative_io("mov %[fallback],%[ret] ; " ASM_NOP6,
> +		       "mov %[started],%[ret] ; "
> +		       ".byte 0xc7,0xf8 ; .long 0 # XBEGIN 0",
> +		       X86_FEATURE_RTM,
> +		       [ret] "=a" (ret),
> +		       [fallback] "i" (_XABORT_SOFTWARE),
> +		       [started] "i" (_XBEGIN_STARTED) : "memory");
> +	return ret;
> +}

So ppc can do something like this.  Stealing from
Documentation/powerpc/transactional_memory.txt, ppc transactions look
like this:

  tbegin
  beq   abort_handler

  ld    r4, SAVINGS_ACCT(r3)
  ld    r5, CURRENT_ACCT(r3)
  subi  r5, r5, 1
  addi  r4, r4, 1
  std   r4, SAVINGS_ACCT(r3)
  std   r5, CURRENT_ACCT(r3)

  tend

  b     continue

abort_handler:
  ... test for odd failures ...

  /* Retry the transaction if it failed because it conflicted with
   * someone else: */
  b     begin_move_money

The abort handler can then see the failure reason via an SPR/status
register, TEXASR.  There are bits in there to specify failure modes like:

  - software failure code (set in the kernel/hypervisor.  see
      arch/powerpc/include/asm/reg.h)
        #define TM_CAUSE_RESCHED        0xfe
        #define TM_CAUSE_TLBI           0xfc
        #define TM_CAUSE_FAC_UNAV       0xfa
        #define TM_CAUSE_SYSCALL        0xf9 /* Persistent */
        #define TM_CAUSE_MISC           0xf6
        #define TM_CAUSE_SIGNAL         0xf4
  - Failure persistent
  - Disallowed (like disallowed instruction)
  - Nested overflow
  - footprint overflow
  - self induced conflict
  - non-transaction conflict
  - transaction conflict
  - instruction fetch conflict
  - tabort instruction
  - failure while transaction was suspended

Some of these overlap with the x86 codes, but I think the fidelity could
be improved.

FYI the TM spec can be downloaded here:
  https://www.power.org/documentation/power-isa-transactional-memory/

Your example code looks like this:

static __init int rtm_test(void)
{
	unsigned status;

	pr_info("simple rtm test\n");
	if ((status = _xbegin()) == _XBEGIN_STARTED) {
		x++;
		_xend();
		pr_info("transaction committed\n");
	} else {
		pr_info("transaction aborted %x\n", status);
	}
	return 0;
}

Firstly, I think we can do something like this with the ppc mnemonics,
so I think the overall idea is ok with me. 

Secondly, can we make xbegin just return true/false and get the status
later if needed?

Something like this (changing the 'x' names too):

	if (tmbegin()) {
		x++;
		tmend();
		pr_info("transaction committed\n");
	} else {
		pr_info("transaction aborted %x\n", tmstatus());
	}
	return 0;

Looks cleaner to me.

> +
> +static __always_inline void _xend(void)
> +{
> +	/* Not patched because these should be not executed in fallback */
> +	asm volatile(".byte 0x0f,0x01,0xd5 # XEND" : : : "memory");
> +}
> +

ppc == tend... should be fine, other than the name.

> +static __always_inline void _xabort(const unsigned int status)
> +{
> +	alternative_input(ASM_NOP3,
> +			  ".byte 0xc6,0xf8,%P0 # XABORT",
> +			  X86_FEATURE_RTM,
> +			  "i" (status) : "memory");
> +}
> +

ppc == tabort... should be fine, other than the name.


> +static __always_inline int _xtest(void)
> +{
> +	unsigned char out;
> +	alternative_io("xor %0,%0 ; " ASM_NOP5,
> +		       ".byte 0x0f,0x01,0xd6 ; setnz %0 # XTEST",
> +		       X86_FEATURE_RTM,
> +		       "=r" (out),
> +		       "i" (0) : "memory");
> +	return out;
> +}
> +
> +#endif

ppc = tcheck... should be fine, other than the name.

Mikey

Thread overview: 50+ messages
2013-03-23  1:24 RFC: Kernel lock elision for TSX Andi Kleen
2013-03-23  1:24 ` [PATCH 01/29] tsx: Add generic noop macros for RTM intrinsics Andi Kleen
2013-03-25  3:39   ` Michael Neuling
2013-03-25  8:19     ` Andi Kleen
2013-03-25  8:50       ` Michael Neuling
2013-03-23  1:24 ` [PATCH 02/29] x86, tsx: Add " Andi Kleen
2013-03-25  3:40   ` Michael Neuling [this message]
2013-03-25  8:15     ` Andi Kleen
2013-03-25  8:54       ` Michael Neuling
2013-03-25  9:32         ` Andi Kleen
2013-03-23  1:24 ` [PATCH 03/29] tsx: Add generic disable_txn macros Andi Kleen
2013-03-23  1:24 ` [PATCH 04/29] tsx: Add generic linux/elide.h macros Andi Kleen
2013-03-23  1:24 ` [PATCH 05/29] x86, tsx: Add a minimal RTM tester at bootup Andi Kleen
2013-03-23  1:25 ` [PATCH 06/29] checkpatch: Don't warn about if ((status = _xbegin()) == _XBEGIN_STARTED) Andi Kleen
2013-03-25  3:39   ` Michael Neuling
2013-03-23  1:25 ` [PATCH 07/29] x86, tsx: Don't abort immediately in __read/write_lock_failed Andi Kleen
2013-03-23  1:25 ` [PATCH 08/29] locking, tsx: Add support for arch_read/write_unlock_irq/flags Andi Kleen
2013-03-23  1:25 ` [PATCH 09/29] x86, xen: Support arch_spin_unlock_irq/flags Andi Kleen
2013-03-23  1:25 ` [PATCH 10/29] locking, tsx: Add support for arch_spin_unlock_irq/flags Andi Kleen
2013-03-23  1:25 ` [PATCH 11/29] x86, paravirt: Add support for arch_spin_unlock_flags/irq Andi Kleen
2013-03-23  1:25 ` [PATCH 12/29] x86, tsx: Add a per thread transaction disable count Andi Kleen
2013-03-23 11:51   ` Borislav Petkov
2013-03-23 13:51     ` Andi Kleen
2013-03-23 15:52       ` Borislav Petkov
2013-03-23 16:25         ` Borislav Petkov
2013-03-23 17:16         ` Linus Torvalds
2013-03-23 17:32           ` Borislav Petkov
2013-03-23 18:01           ` Andi Kleen
2013-03-23  1:25 ` [PATCH 13/29] params: Add a per cpu module param type Andi Kleen
2013-03-23  1:25 ` [PATCH 14/29] params: Add static key module param Andi Kleen
2013-03-23  1:25 ` [PATCH 15/29] x86, tsx: Add TSX lock elision infrastructure Andi Kleen
2013-03-23  1:25 ` [PATCH 16/29] locking, tsx: Allow architecture to control mutex fast path owner field Andi Kleen
2013-03-23  1:25 ` [PATCH 17/29] x86, tsx: Enable lock elision for mutexes Andi Kleen
2013-03-23  1:25 ` [PATCH 18/29] locking, tsx: Abort is mutex_is_locked() Andi Kleen
2013-03-23  1:25 ` [PATCH 19/29] x86, tsx: Add support for rwsem elision Andi Kleen
2013-03-23  1:25 ` [PATCH 20/29] x86, tsx: Enable elision for read write spinlocks Andi Kleen
2013-03-23  1:25 ` [PATCH 21/29] locking, tsx: Protect assert_spin_locked() with _xtest() Andi Kleen
2013-03-23  1:25 ` [PATCH 22/29] locking, tsx: Add a trace point for elision skipping Andi Kleen
2013-03-23  1:25 ` [PATCH 23/29] x86, tsx: Add generic per-lock adaptive lock elision support Andi Kleen
2013-03-23  1:25 ` [PATCH 24/29] x86, tsx: Use adaptive elision for mutexes Andi Kleen
2013-03-23  1:25 ` [PATCH 25/29] x86, tsx: Add adaption support for spinlocks Andi Kleen
2013-03-23  1:25 ` [PATCH 26/29] x86, tsx: Add adaptation support to rw spinlocks Andi Kleen
2013-03-23  1:25 ` [PATCH 27/29] locking, tsx: Add elision to bit spinlocks Andi Kleen
2013-03-23  1:25 ` [PATCH 28/29] x86, tsx: Add adaptive elision for rwsems Andi Kleen
2013-03-23  1:25 ` [PATCH 29/29] tsx: Add documentation for lock-elision Andi Kleen
2013-03-23 17:11 ` RFC: Kernel lock elision for TSX Linus Torvalds
2013-03-23 18:00   ` Andi Kleen
2013-03-23 18:02     ` Andi Kleen
2013-03-24 14:17     ` Benjamin Herrenschmidt
2013-03-25  0:59       ` Michael Neuling
