From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Eldon Stegall <eldon-qemu@eldondev.com>
Cc: Ben Dooks <qemu@ben.fluff.org>,
Peter Maydell <peter.maydell@linaro.org>,
QEMU Developers <qemu-devel@nongnu.org>
Subject: Re: out of CI pipeline minutes again
Date: Mon, 27 Feb 2023 16:59:19 +0000
Message-ID: <Y/zhZ4brfdQ7nwLI@redhat.com>
In-Reply-To: <Y/fkf3Cya1NOopQA@invalid>

On Thu, Feb 23, 2023 at 10:11:11PM +0000, Eldon Stegall wrote:
> On Thu, Feb 23, 2023 at 03:33:00PM +0000, Daniel P. Berrangé wrote:
> > IIUC, we already have available compute resources from a couple of
> > sources we could put into service. The main issue is someone to
> > actually configure them to act as runners *and* maintain their
> > operation indefinitely going forward. The sysadmin problem is
> > what made/makes gitlab's shared runners so incredibly appealing.
>
> Hello,
>
> I would like to do this, but the path to contribute in this way isn't clear to
> me at this moment. I made it as far as creating a GitLab fork of QEMU, and then
> attempting to create a merge request from my branch in order to test the GitLab
> runner I have provisioned. Not having previously tried to contribute via
> GitLab, I was a bit stymied that it is not even possible to create a merge
> request unless I am a member of the project? I clicked a button to request
> access.
>
> Alex's plan from last month sounds feasible:
>
> - provisioning scripts in scripts/ci/setup (if existing not already
> good enough)
> - tweak to handle multiple runner instances (or more -j on the build)
> - changes to .gitlab-ci.d/ so we can use those machines while keeping
> ability to run on shared runners for those outside the project
>
> Daniel, you pointed out the importance of reproducibility, and thus the
> use of the two-step process, build-docker, and then test-in-docker, so it
> seems that only docker and the gitlab agent would be strong requirements for
> running the jobs?

Almost our entire CI setup is built around the use of docker and I don't
believe we really want to change that. Even ignoring GitLab, pretty
much all public CI services support use of docker containers for the
CI environment, so that is a de facto standard.

So while the gitlab runner agent can support many different execution
environments, I don't think we want to consider any except for the
ones that support containers (and docker-in-docker would need to be
enabled too). Essentially we'll be using GitLab's free CI credits
for most of the month. What we need is some extra private CI resource
that can pick up the slack when we run out of free CI credits each
month. Thus the private CI resource needs to be compatible with the
public shared runners, by providing the same docker-based environment[1].
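
For illustration only, here is a minimal sketch of registering such a
private runner with the docker executor; the URL, token, image and tag
below are placeholders rather than values from our infrastructure, and
--docker-privileged is what enables docker-in-docker:

  $ sudo gitlab-runner register \
      --non-interactive \
      --url https://gitlab.com/ \
      --registration-token REDACTED \
      --executor docker \
      --docker-image docker:stable \
      --docker-privileged \
      --description "qemu-private-runner" \
      --tag-list "qemu-private"

Jobs could then opt into such a runner via the tag, while remaining
runnable on the shared runners for people outside the project.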

It is a great shame that our current private runners ansible playbooks
were not configuring the system for use with docker, as that would
have got us 90% of the way there already.
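
Roughly the missing step, sketched here as shell for a Debian/Ubuntu
host (package names and the runner user will differ elsewhere, so treat
this as an assumption rather than a drop-in addition to the playbooks):

  # install docker and let the gitlab-runner service use it
  $ sudo apt-get install -y docker.io
  $ sudo systemctl enable --now docker
  $ sudo usermod -aG docker gitlab-runner
  $ sudo gitlab-runner restart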

One thing to bear in mind is that a typical QEMU pipeline has 130 jobs
running.

Each gitlab shared runner is 1 vCPU, 3.75 GB of RAM, and we're using
as many as 60-70 such instances at a time. A single physical
machine probably won't cope unless it is very big.
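
If we did try a single big machine, the knob to raise is the runner's
"concurrent" setting, so that one host services many jobs in parallel.
A rough sketch, assuming the default config path used by the upstream
gitlab-runner packages and an arbitrary value of 16:

  # allow up to 16 jobs to run at once on this host
  $ sudo sed -i 's/^concurrent = .*/concurrent = 16/' \
      /etc/gitlab-runner/config.toml
  $ sudo gitlab-runner restart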

To avoid making the overall pipeline wallclock time too long, we need
to be able to handle a large number of parallel jobs at certain times.
We're quite peaky in our usage. Some days we merge nothing and so
consume no CI. Some days we may merge many PRs and so consume lots
of CI. So buying lots of VMs to run 24x7 is quite wasteful. A burstable
container service is quite appealing.

IIUC, GitLab's shared runners use GCP's "spot" instances which are
cheaper than regular instances. The downside is that the VM can get
killed/descheduled if something higher priority needs Google's
resources. Not too nice for reliability, but excellent for cost saving.

With regards,
Daniel
[1] there are still several ways to achieve this. A bare metal machine
with a local install of docker, or podman, vs pointing to a public
k8s instance that can run containers, and possibly other options
too.
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|