OpenEmbedded Core Discussions
From: Ulf Samuelsson <ulf@emagii.com>
To: Martin Jansa <martin.jansa@gmail.com>
Cc: openembedded-core@lists.openembedded.org
Subject: Re: Improving Build Speed
Date: Thu, 21 Nov 2013 08:15:08 +0100	[thread overview]
Message-ID: <528DB2FC.90205@emagii.com> (raw)
In-Reply-To: <20131121001922.GW3708@jama>

On 2013-11-21 01:19, Martin Jansa wrote:
> On Wed, Nov 20, 2013 at 11:43:13PM +0100, Ulf Samuelsson wrote:
>> On 2013-11-20 22:29, Richard Purdie wrote:
>> Another idea:
>>
>> I suspect that there is a lot of unpacking and patching of recipes
>> for the target while the native stuff is built.
>> Does it make sense to have multiple threads reading the disk for
>> the target recipes during the native build, or will we just lose out
>> due to seek time?
>>
>> Having multiple threads accessing the disk might force the disk to spend
>> most of its time seeking.
>> I found an application that measures seek performance:
>> my WD Black will do 83 seeks per second, and my SAS disk will do
>> twice that.
>> A RAID of two SAS disks provides close to SSD throughput (380 MB/s),
>> but its seek time is no better than a single SAS disk's.
>>
>> Since there is "empty time" at the end of the native build, does it
>> make sense to hold back the unpack/patch of target stuff until we
>> reach that point, and then let loose?
> In my benchmarks, increasing PARALLEL_MAKE up to the number of cores
> significantly improved build time, but BB_NUMBER_THREADS had minimal
> influence above 6 or 8 (tested on various systems; only 4 was
> optimal on my older RAID-0, and 2 on a single disk).
> Of course, it was quite different for a clean build without
> prepopulated sstate versus a build where most of the stuff was reused
> from sstate.
>
> see http://wiki.webos-ports.org/wiki/OE_benchmark

How many cores do you have in your build machine?
I started a build, and after 20 minutes it had completed 1500 tasks using:

PARALLEL_MAKE     = "-j24"
BB_NUMBER_THREADS =   "6"

Then I decided to kill it.

When I did
PARALLEL_MAKE     = "-j12"
BB_NUMBER_THREADS =   "24"

It completed 2000 tasks in less than half the time.

This does not use tmpfs, though.
Do you have any comparison between tmpfs builds and RAID builds?

I currently do not use INHERIT += "rm_work",
since I want to be able to make changes to some packages.
Is there a way to enable rm_work on a per-package basis?
Then the work directories of the majority of the packages could be removed.
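
Ideally something like this in local.conf (I believe rm_work.bbclass
supports an RM_WORK_EXCLUDE list for exactly this; the recipe names
below are just examples):

```conf
INHERIT += "rm_work"
# keep the workdirs of the recipes I am actively changing
RM_WORK_EXCLUDE += "busybox linux-yocto"
```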

I use 75 GB without "rm_work".


BR
Ulf
>
>> ========================
>>
>> Now with 48 GB of RAM (which I might grow to 96 GB, if someone proves
>> that this makes it faster), this might be useful to speed things up.
>>
>> Can tmpfs beat the kernel cache system?
>>
>> 1.    Typically, I work on fewer than 10 recipes, and if I continuously
>>           rebuild those, why not create their build directories as links
>>           to a tmpfs file system?
>>           Maybe a configuration file with a list of recipes to build on
>>           tmpfs.
>>
>>           During a build from scratch this is not so useful, but once
>>           most stuff is in place, it might be.
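
To make idea 1 concrete, an untested sketch, assuming a Linux host
where /dev/shm is tmpfs (the recipe name and workdir layout below are
illustrative only, not the real OE paths):

```shell
# Redirect one recipe's workdir onto tmpfs via a symlink.
TMPFS_WORK=/dev/shm/oe-work
RECIPE=busybox
WORKDIR=tmp/work/$RECIPE   # stand-in for the real per-recipe workdir path

mkdir -p "$TMPFS_WORK/$RECIPE"
mkdir -p "$(dirname "$WORKDIR")"
rm -rf "$WORKDIR"
# from now on, this recipe's workdir lives on tmpfs
ln -s "$TMPFS_WORK/$RECIPE" "$WORKDIR"
```

A configuration file listing the recipes to treat this way could drive
a loop over exactly these commands.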
>>
>> 2.     If the downloads directory were shadowed in a tmpfs file system,
>>           there would be less seek time during the build.
>>           The downloads tmpfs should be populated at boot time,
>>           and rsynced with a real disk in the background when new stuff
>>           is downloaded from the Internet.
>>
>> 3.     With 96 GB of RAM, maybe the complete build directory will fit.
>>           It would be nice to build everything on tmpfs, and automatically
>>           rsync to a real disk when there is nothing else to do...
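
How I imagine idea 3 could look; the mount point and size are examples:

```conf
# /etc/fstab: one big tmpfs for the whole build tree
#   tmpfs  /mnt/build-tmpfs  tmpfs  size=80G,mode=0755  0  0
# conf/local.conf: put the build output there
TMPDIR = "/mnt/build-tmpfs/tmp"
```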
>>
>> 4.     If tmpfs is not used, it would still be good to have better
>>           control over the build directory.
>>           It makes sense to me to have the metadata on an SSD, but the
>>           build directory should be on my RAID cluster for fast rebuilds.
>>           I can set this up manually, but it would be better to be able
>>           to specify this in a configuration file.
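
Most of that split can, I think, already be expressed in local.conf;
the paths below are just examples for my setup:

```conf
# metadata/downloads on the SSD, heavy build output on the RAID
DL_DIR     = "/ssd/oe/downloads"
SSTATE_DIR = "/raid/oe/sstate-cache"
TMPDIR     = "/raid/oe/tmp"
```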
>>
> See
> http://www.mail-archive.com/yocto@yoctoproject.org/msg14879.html
>


-- 
Best Regards
Ulf Samuelsson
ulf@emagii.com
+46 722 427437



Thread overview: 13+ messages
2013-11-20 21:05 Improving Build Speed Ulf Samuelsson
2013-11-20 21:29 ` Richard Purdie
2013-11-20 22:43   ` Ulf Samuelsson
2013-11-21  0:19     ` Martin Jansa
2013-11-21  7:15       ` Ulf Samuelsson [this message]
2013-11-21 12:53         ` Martin Jansa
2013-11-23 18:39         ` Nicolas Dechesne
2013-11-21  0:10   ` Martin Jansa
2013-11-21  8:04   ` Ulf Samuelsson
2013-11-21 13:53     ` Richard Purdie
2013-11-23 15:06       ` Ulf Samuelsson
2013-11-21 10:05 ` Burton, Ross
2013-11-21 11:51 ` Enrico Scholz
