From: Bruce Richardson
Subject: Re: A question about hugepage initialization time
Date: Thu, 11 Dec 2014 10:14:49 +0000
Message-ID: <20141211101449.GB5668@bricha3-MOBL3>
References: <20141209141032.5fa2db0d@urahara> <20141210103225.GA10056@bricha3-MOBL3> <20141210142926.GA17040@localhost.localdomain> <20141210143558.GB1632@bricha3-MOBL3>
To: László Vadkerti
Cc: "dev-VfR2kkLFssw@public.gmane.org"
List-Id: patches and discussions about DPDK

On Wed, Dec 10, 2014 at 07:16:59PM +0000, László Vadkerti wrote:
> well, here it is :)
> 
> On Wed, 10 Dec 2014, Bruce Richardson wrote:
> 
> > On Wed, Dec 10, 2014 at 09:29:26AM -0500, Neil Horman wrote:
> >> On Wed, Dec 10, 2014 at 10:32:25AM +0000, Bruce Richardson wrote:
> >>> On Tue, Dec 09, 2014 at 02:10:32PM -0800, Stephen Hemminger wrote:
> >>>> On Tue, 9 Dec 2014 11:45:07 -0800
> >>>> &rew wrote:
> >>>>
> >>>>>> Hey Folks,
> >>>>>>
> >>>>>> Our DPDK application deals with very large in-memory data
> >>>>>> structures, and can potentially use tens or even hundreds of
> >>>>>> gigabytes of hugepage memory. During the course of development,
> >>>>>> we've noticed that as the number of huge pages increases, the
> >>>>>> memory initialization time during EAL init gets to be quite long,
> >>>>>> lasting several minutes at present. The growth in init time
> >>>>>> doesn't appear to be linear, which is concerning.
> >>>>>>
> >>>>>> This is a minor inconvenience for us and our customers, as memory
> >>>>>> initialization makes our boot times a lot longer than they would
> >>>>>> otherwise be. Also, my experience has been that really long
> >>>>>> operations are often hiding errors - what you think is merely a
> >>>>>> slow operation is actually a timeout of some sort, often due to
> >>>>>> misconfiguration. This leads to two questions:
> >>>>>>
> >>>>>> 1. Does the long initialization time suggest that there's an
> >>>>>> error happening under the covers?
> >>>>>> 2. If not, is there any simple way that we can shorten memory
> >>>>>> initialization time?
> >>>>>>
> >>>>>> Thanks in advance for your insights.
> >>>>>>
> >>>>>> --
> >>>>>> Matt Laswell
> >>>>>> laswell-bIuJOMs36aleGPcbtGPokg@public.gmane.org
> >>>>>> infinite io, inc.
> >>>>>>
> >>>>>
> >>>>> Hello,
> >>>>>
> >>>>> please find some quick comments on the questions:
> >>>>> 1.) In our experience a long initialization time is normal with a
> >>>>> large amount of memory. However, this time depends on a few things:
> >>>>> - number of hugepages (page faults handled by the kernel are
> >>>>> pretty expensive)
> >>>>> - size of hugepages (memset at initialization)
> >>>>>
> >>>>> 2.) Using 1G pages instead of 2M will reduce the initialization
> >>>>> time significantly. Using wmemset instead of memset adds an
> >>>>> additional 20-30% boost by our measurements. Or, by just touching
> >>>>> the pages but not clearing them, you can get some further
> >>>>> speedup. But in this case your layer or the applications above
> >>>>> need to do the cleanup at allocation time (e.g. by using rte_zmalloc).
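
[A minimal standalone sketch of the trade-off described above - fully zeroing
hugepage memory at startup versus only touching each page so the fault happens
now and the zeroing is deferred to allocation time (e.g. rte_zmalloc()). This
is not the actual EAL initialization code; the MAP_HUGETLB mapping and the
2 MB page size are assumptions made for the example.]

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define HUGEPAGE_SZ (2UL * 1024 * 1024)   /* assumed 2 MB hugepages */

    int main(void)
    {
        size_t len = 64 * HUGEPAGE_SZ;        /* 128 MB for the example */
        char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (mem == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");      /* needs pre-reserved hugepages */
            return 1;
        }

        /* Option 1: clear everything at init time (one big memset). */
        /* memset(mem, 0, len); */

        /* Option 2: touch one byte per page so the page fault happens now,
         * and leave the zeroing to whoever later allocates from this memory. */
        for (size_t off = 0; off < len; off += HUGEPAGE_SZ)
            mem[off] = 0;

        printf("faulted in %zu hugepages\n", len / HUGEPAGE_SZ);
        munmap(mem, len);
        return 0;
    }
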
> >>>>>
> >>>>> Cheers,
> >>>>> &rew
> >>>>
> >>>> I wonder if the whole rte_malloc code is even worth it with a
> >>>> modern kernel with transparent huge pages? rte_malloc adds very
> >>>> little value and is less safe and slower than glibc or other
> >>>> allocators. Plus you lose the ability to get all the benefit out
> >>>> of valgrind or electric fence.
> >>>
> >>> While I'd dearly love to not have our own custom malloc lib to
> >>> maintain, for DPDK multiprocess, rte_malloc will be hard to replace
> >>> as we would need a replacement solution that similarly guarantees
> >>> that memory mapped in process A is also available at the same
> >>> address in process B. :-(
> >>>
> >> Just out of curiosity, why even bother with multiprocess support?
> >> What you're talking about above is a multithread model, and you're
> >> shoehorning multiple processes into it.
> >> Neil
> >>
> >
> > Yep, that's pretty much what it is alright. However, this multiprocess
> > support is very widely used by our customers in building their
> > applications, and has been in place and supported since some of the
> > earliest DPDK releases. If it is to be removed, it needs to be
> > replaced by something that provides equivalent capabilities to
> > application writers (perhaps something with more fine-grained sharing
> > etc.)
> >
> > /Bruce
> >
> 
> It is probably time to start discussing how to pull in the multi-process
> and memory management improvements we were talking about in our
> DPDK Summit presentation:
> https://www.youtube.com/watch?v=907VShi799k#t=647
> 
> A multi-process model could have several benefits, mostly in the high
> availability area (a telco requirement), due to better separation,
> control over permissions (per-process RO or RW page mappings),
> single-process restartability, improved startup and core dumping times, etc.
> 
> As a summary of our memory management additions: they allow an application
> to describe its memory model in a configuration (or via an API),
> e.g. a simplified config would say that every instance needs 4GB of
> private memory and 2GB of shared memory. In a multi-process model this
> results in mapping only 6GB of memory in each process, instead of the
> current DPDK model where the 4GB per-process private memory is mapped
> into all other processes, resulting in unnecessary mappings,
> e.g. 16x4GB + 2GB in every process.
> 
> What we've chosen is to use DPDK's NUMA-aware allocator for this purpose,
> e.g. the above example with 16 instances results in allocating
> 17 DPDK "NUMA sockets" (1 default shared + 16 private), and we can
> selectively map a given "NUMA socket" (set of memsegs) into a process.
> This also opens up many other possibilities to play with, e.g.
> - clearing the full private memory if a process dies, including the
>   memzones on it
> - pop-up memory support
> etc. etc.
> 
> Another option could be to use page-aligned memzones and control the
> mapping/permissions at the memzone level.
> 
> /Laszlo

Those enhancements sound really, really good. Do you have code for these that
you can share, that we can start looking at with a view to pulling it in?

/Bruce
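
[For reference, the same-address guarantee discussed above is the property
DPDK's existing primary/secondary process model provides through memzones. A
minimal sketch of that behaviour follows; it assumes the secondary process is
started with --proc-type=secondary, and the memzone name "shared_state" is an
arbitrary example, not an existing DPDK or application symbol.]

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_memzone.h>

    int main(int argc, char **argv)
    {
        const struct rte_memzone *mz;

        if (rte_eal_init(argc, argv) < 0)
            return 1;

        if (rte_eal_process_type() == RTE_PROC_PRIMARY)
            /* Primary process reserves the zone from hugepage memory. */
            mz = rte_memzone_reserve("shared_state", 1 << 20,
                                     rte_socket_id(), 0);
        else
            /* Secondary process (--proc-type=secondary) finds the same zone. */
            mz = rte_memzone_lookup("shared_state");

        if (mz == NULL)
            return 1;

        /* Both processes print the same virtual address here, which is what
         * lets pointers stored inside shared structures remain valid. */
        printf("shared_state mapped at %p\n", mz->addr);
        return 0;
    }
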