From: Mathieu Desnoyers
To: LKML
Cc: ltt-dev@lists.casi.polymtl.ca, Linus Torvalds, Andrew Morton, Ingo Molnar, Peter Zijlstra, Steven Rostedt, Frederic Weisbecker, Thomas Gleixner, Christoph Hellwig, Mathieu Desnoyers, Li Zefan, Lai Jiangshan, Johannes Berg, Masami Hiramatsu, Arnaldo Carvalho de Melo, Tom Zanussi, KOSAKI Motohiro, Andi Kleen
Subject: [RFC PATCH 09/20] x86: inline memcpy
Date: Tue, 17 Aug 2010 19:16:28 -0400
Message-Id: <20100817232152.354693489@efficios.com>
References: <20100817231619.277457797@efficios.com>
Content-Disposition: inline; filename=x86-inline-memcpy.patch

Export an inline_memcpy() API, useful when the memcpy size is not a compile-time constant but the caller cannot afford the cost of a function call, i.e. for very frequent memcpy callers.
Signed-off-by: Mathieu Desnoyers
---
 arch/x86/include/asm/string_32.h |    7 +++++++
 arch/x86/include/asm/string_64.h |    7 +++++++
 2 files changed, 14 insertions(+)

Index: linux.trees.git/arch/x86/include/asm/string_32.h
===================================================================
--- linux.trees.git.orig/arch/x86/include/asm/string_32.h	2010-06-21 17:50:21.000000000 -0400
+++ linux.trees.git/arch/x86/include/asm/string_32.h	2010-06-21 17:56:49.000000000 -0400
@@ -44,6 +44,13 @@ static __always_inline void *__memcpy(vo
 	return to;
 }
 
+#define __HAVE_ARCH_INLINE_MEMCPY
+static __always_inline void *inline_memcpy(void *to, const void *from,
+					   size_t n)
+{
+	return __memcpy(to, from, n);
+}
+
 /*
  * This looks ugly, but the compiler can optimize it totally,
  * as the count is constant.
Index: linux.trees.git/arch/x86/include/asm/string_64.h
===================================================================
--- linux.trees.git.orig/arch/x86/include/asm/string_64.h	2010-06-21 17:50:21.000000000 -0400
+++ linux.trees.git/arch/x86/include/asm/string_64.h	2010-06-21 17:57:13.000000000 -0400
@@ -23,6 +23,13 @@ static __always_inline void *__inline_me
 	return to;
 }
 
+#define __HAVE_ARCH_INLINE_MEMCPY
+static __always_inline void *inline_memcpy(void *to, const void *from,
+					   size_t n)
+{
+	return __inline_memcpy(to, from, n);
+}
+
 /* Even with __builtin_ the compiler may decide to use the out of line
    function. */