From: Bruce Richardson
Subject: Re: some questions about rte_memcpy
Date: Thu, 22 Jan 2015 11:34:27 +0000
Message-ID: <20150122113426.GC4580@bricha3-MOBL3>
References: <54C070DF.1050006@huawei.com> <20150122044531.GA13230@mhcomputing.net> <54C08B54.50700@huawei.com> <20150122073526.GA14800@mhcomputing.net> <54C0CFB5.909@igel.co.jp>
In-Reply-To: <54C0CFB5.909-AlSX/UN32fvPDbFq/vQRIQ@public.gmane.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
To: Tetsuya Mukawa
Cc: "dev-VfR2kkLFssw@public.gmane.org"
List-Id: patches and discussions about DPDK

On Thu, Jan 22, 2015 at 07:23:49PM +0900, Tetsuya Mukawa wrote:
> On 2015/01/22 16:35, Matthew Hall wrote:
> > On Thu, Jan 22, 2015 at 01:32:04PM +0800, Linhaifeng wrote:
> >> Do you mean that calling rte_memcpy() before rte_eal_init() would crash? Why?
> > No guarantee. But a theory. It might use some things from the EAL init to
> > figure out which version of the accelerated algorithm to use.
>
> This selection is done at compile time.
> And if the size is constant, I guess DPDK assumes memcpy is replaced by
> an inline __builtin_memcpy.
> I haven't checked the performance of the builtin memcpy, but it is probably
> much faster.
>

Yes, that assumption is correct. A couple of years ago we discovered that for
constant-size values, the compiler would generate much faster code for us
using a regular memcpy than rte_memcpy, hence the macro.

/Bruce

> Tetsuya
>
> > Matthew.
> >
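
[Editor's note: below is a minimal, self-contained sketch of the kind of
compile-time dispatch discussed above. It is not DPDK's actual header; the
names copy_dispatch and copy_runtime are illustrative stand-ins for the macro
and optimised routine in rte_memcpy.h. The point it shows is that
__builtin_constant_p() lets a macro hand constant-size copies to the
compiler's inline memcpy, while runtime-sized copies go to a separate
function.]

#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for a hand-optimised copy routine; in DPDK the
 * real implementation uses vector intrinsics chosen at compile time. */
static void *
copy_runtime(void *dst, const void *src, size_t n)
{
	printf("runtime-sized copy of %zu bytes\n", n);
	return memcpy(dst, src, n);
}

/* Sketch of the dispatch macro: when the length is a compile-time constant,
 * fall back to plain memcpy() so the compiler can expand it inline
 * (__builtin_memcpy); otherwise call the runtime routine. */
#define copy_dispatch(dst, src, n)                \
	(__builtin_constant_p(n) ?                \
		memcpy((dst), (src), (n)) :       \
		copy_runtime((dst), (src), (n)))

int
main(void)
{
	char src[64] = "hello", dst[64];
	size_t len = strlen(src) + 1;

	copy_dispatch(dst, src, 6);    /* constant size: inlined by compiler */
	copy_dispatch(dst, src, len);  /* runtime size: calls copy_runtime() */
	printf("%s\n", dst);
	return 0;
}

[This also explains why no EAL initialisation is needed before calling
rte_memcpy: the choice is made by the compiler, not at run time.]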