From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755906Ab0GIXkK (ORCPT );
	Fri, 9 Jul 2010 19:40:10 -0400
Received: from smtp.polymtl.ca ([132.207.4.11]:59632 "EHLO smtp.polymtl.ca"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753779Ab0GIXhW (ORCPT );
	Fri, 9 Jul 2010 19:37:22 -0400
Message-Id: <20100709225816.884268032@efficios.com>
User-Agent: quilt/0.48-1
Date: Fri, 09 Jul 2010 18:57:36 -0400
From: Mathieu Desnoyers
To: Steven Rostedt, LKML
Cc: Linus Torvalds, Andrew Morton, Peter Zijlstra, Ingo Molnar,
	Frederic Weisbecker, Thomas Gleixner, Christoph Hellwig,
	Mathieu Desnoyers, Li Zefan, Lai Jiangshan, Johannes Berg,
	Masami Hiramatsu, Arnaldo Carvalho de Melo, Tom Zanussi,
	KOSAKI Motohiro, Andi Kleen
Subject: [patch 09/20] x86 inline memcpy
References: <20100709225727.312232266@efficios.com>
Content-Disposition: inline; filename=x86-inline-memcpy.patch
X-Poly-FromMTA: (test.casi.polymtl.ca [132.207.72.60]) at Fri, 9 Jul 2010 22:58:17 +0000
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Export an inline_memcpy() API. It is useful when the memcpy size is not
known at compile time but the caller cannot afford the cost of a function
call, e.g. for very frequent memcpy callers.
Signed-off-by: Mathieu Desnoyers
---
 arch/x86/include/asm/string_32.h |    7 +++++++
 arch/x86/include/asm/string_64.h |    7 +++++++
 2 files changed, 14 insertions(+)

Index: linux.trees.git/arch/x86/include/asm/string_32.h
===================================================================
--- linux.trees.git.orig/arch/x86/include/asm/string_32.h	2010-06-21 17:50:21.000000000 -0400
+++ linux.trees.git/arch/x86/include/asm/string_32.h	2010-06-21 17:56:49.000000000 -0400
@@ -44,6 +44,13 @@ static __always_inline void *__memcpy(vo
 	return to;
 }
 
+#define __HAVE_ARCH_INLINE_MEMCPY
+static __always_inline void *inline_memcpy(void *to, const void *from,
+					   size_t n)
+{
+	return __memcpy(to, from, n);
+}
+
 /*
  * This looks ugly, but the compiler can optimize it totally,
  * as the count is constant.
Index: linux.trees.git/arch/x86/include/asm/string_64.h
===================================================================
--- linux.trees.git.orig/arch/x86/include/asm/string_64.h	2010-06-21 17:50:21.000000000 -0400
+++ linux.trees.git/arch/x86/include/asm/string_64.h	2010-06-21 17:57:13.000000000 -0400
@@ -23,6 +23,13 @@ static __always_inline void *__inline_me
 	return to;
 }
 
+#define __HAVE_ARCH_INLINE_MEMCPY
+static __always_inline void *inline_memcpy(void *to, const void *from,
+					   size_t n)
+{
+	return __inline_memcpy(to, from, n);
+}
+
 /* Even with __builtin_ the compiler may decide to use the out of line
    function. */