From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Erik Skultety <eskultet@redhat.com>
Cc: "Philippe Mathieu-Daudé" <philmd@linaro.org>, qemu-devel@nongnu.org
Subject: Re: [PATCH 2/3] tests/vm: Introduce get_qemu_packages_from_lcitool_vars() helper
Date: Thu, 1 Jun 2023 10:57:27 +0100
Message-ID: <ZHhrh1tJjvC3xF62@redhat.com>
In-Reply-To: <ZHhKe3fzmau7qsMn@ridgehead.home.lan>
On Thu, Jun 01, 2023 at 09:36:27AM +0200, Erik Skultety wrote:
> On Wed, May 31, 2023 at 10:09:05PM +0200, Philippe Mathieu-Daudé wrote:
> > The 'lcitool variables $OS qemu' command produces a file containing
> > consistent environment variables helpful to build QEMU on $OS.
> > In particular the $PKGS variable contains the packages required to
> > build QEMU.
> >
> > Since some of these files are committed in the repository (see
> > 0e103a65ba "gitlab: support for FreeBSD 12, 13 and macOS 11 via
> > cirrus-run"), we can parse these files to get the package list
> > required to build a VM.
> >
> > Add the get_qemu_packages_from_lcitool_vars() helper which returns
> > such a package list from an lcitool env var file.
> >
> > Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> > ---
> > tests/vm/basevm.py | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/tests/vm/basevm.py b/tests/vm/basevm.py
> > index 8ec021ddcf..2632c3d41a 100644
> > --- a/tests/vm/basevm.py
> > +++ b/tests/vm/basevm.py
> > @@ -522,6 +522,12 @@ def get_qemu_version(qemu_path):
> >      version_num = re.split(' |\(', version_line)[3].split('.')[0]
> >      return int(version_num)
> >
> > +def get_qemu_packages_from_lcitool_vars(vars_path):
> > +    """Parse a lcitool variables file and return the PKGS list."""
> > +    with open(vars_path, 'r') as fd:
> > +        line = list(filter(lambda y: y.startswith('PKGS'), fd.readlines()))[0]
> > +        return line.split("'")[1].split()
>
> Nothing wrong with this one, and it's also fewer lines of code, but just an
> FYI in case you wanted a slightly more readable (yet a tiny bit less
> performant) piece of code, you could make use of the JSON format with
> 'variables --format json'.
Specifically we could do
    with open(vars_path, 'r') as fh:
        return json.load(fh)['pkgs']
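
Fleshed out into a drop-in replacement helper (only a sketch, assuming
lcitool's JSON output exposes the package list under a lower-case 'pkgs'
key as in the snippet above; the helper name here is just illustrative):

    import json

    def get_qemu_packages_from_lcitool_json(vars_path):
        """Parse an lcitool JSON variables file and return the package list."""
        with open(vars_path, 'r') as fh:
            # 'pkgs' mirrors the PKGS variable from the shell-style format
            return json.load(fh)['pkgs']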
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|