From: "Philippe Mathieu-Daudé" <philmd@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>, qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH v2 6/6] scripts/coverity-scan: Add Docker support
Date: Tue, 14 Apr 2020 13:58:11 +0200
Message-ID: <5012c7e4-c1ec-79e7-ac0a-f15e2eb1fd6e@redhat.com>
In-Reply-To: <20200319193323.2038-7-peter.maydell@linaro.org>
On 3/19/20 8:33 PM, Peter Maydell wrote:
> Add support for running the Coverity Scan tools inside a Docker
> container rather than directly on the host system.
>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> v1->v2:
> * various bug fixes
> * added --src-tarball rather than putting the whole source
> tree in the 'secrets' directory
> * docker file package list updated
> ---
> scripts/coverity-scan/coverity-scan.docker | 131 +++++++++++++++++++++
> scripts/coverity-scan/run-coverity-scan | 90 ++++++++++++++
> 2 files changed, 221 insertions(+)
> create mode 100644 scripts/coverity-scan/coverity-scan.docker
>
> diff --git a/scripts/coverity-scan/coverity-scan.docker b/scripts/coverity-scan/coverity-scan.docker
> new file mode 100644
> index 00000000000..a4f64d12834
> --- /dev/null
> +++ b/scripts/coverity-scan/coverity-scan.docker
> @@ -0,0 +1,131 @@
> +# syntax=docker/dockerfile:1.0.0-experimental
> +#
> +# Docker setup for running the "Coverity Scan" tools over the source
> +# tree and uploading the results to the website, as per
> +# https://scan.coverity.com/projects/qemu/builds/new
> +# We do this on a fixed config (currently Fedora 30 with a known
> +# set of dependencies and a configure command that enables a specific
> +# set of options) so that random changes don't result in our accidentally
> +# dropping some files from the scan.
> +#
> +# We don't build on top of the fedora.docker file because we don't
> +# want to accidentally change or break the scan config when that
> +# is updated.
> +
> +# The work of actually doing the build is handled by the
> +# run-coverity-scan script.
> +
> +FROM fedora:30
> +ENV PACKAGES \
> + alsa-lib-devel \
> + bc \
> + bison \
> + brlapi-devel \
> + bzip2 \
> + bzip2-devel \
> + ccache \
> + clang \
> + curl \
> + cyrus-sasl-devel \
> + dbus-daemon \
> + device-mapper-multipath-devel \
> + findutils \
> + flex \
> + gcc \
> + gcc-c++ \
> + gettext \
> + git \
> + glib2-devel \
> + glusterfs-api-devel \
> + gnutls-devel \
> + gtk3-devel \
> + hostname \
> + libaio-devel \
> + libasan \
> + libattr-devel \
> + libblockdev-mpath-devel \
> + libcap-devel \
> + libcap-ng-devel \
> + libcurl-devel \
> + libepoxy-devel \
> + libfdt-devel \
> + libgbm-devel \
> + libiscsi-devel \
> + libjpeg-devel \
> + libpmem-devel \
> + libnfs-devel \
> + libpng-devel \
> + librbd-devel \
> + libseccomp-devel \
> + libssh-devel \
> + libubsan \
> + libudev-devel \
> + libusbx-devel \
> + libxml2-devel \
> + libzstd-devel \
> + llvm \
> + lzo-devel \
> + make \
> + mingw32-bzip2 \
> + mingw32-curl \
> + mingw32-glib2 \
> + mingw32-gmp \
> + mingw32-gnutls \
> + mingw32-gtk3 \
> + mingw32-libjpeg-turbo \
> + mingw32-libpng \
> + mingw32-libtasn1 \
> + mingw32-nettle \
> + mingw32-nsis \
> + mingw32-pixman \
> + mingw32-pkg-config \
> + mingw32-SDL2 \
> + mingw64-bzip2 \
> + mingw64-curl \
> + mingw64-glib2 \
> + mingw64-gmp \
> + mingw64-gnutls \
> + mingw64-gtk3 \
> + mingw64-libjpeg-turbo \
> + mingw64-libpng \
> + mingw64-libtasn1 \
> + mingw64-nettle \
> + mingw64-pixman \
> + mingw64-pkg-config \
> + mingw64-SDL2 \
> + ncurses-devel \
> + nettle-devel \
> + nss-devel \
> + numactl-devel \
> + perl \
> + perl-Test-Harness \
> + pixman-devel \
> + pulseaudio-libs-devel \
> + python3 \
> + python3-sphinx \
> + PyYAML \
> + rdma-core-devel \
> + SDL2-devel \
> + snappy-devel \
> + sparse \
> + spice-server-devel \
> + systemd-devel \
> + systemtap-sdt-devel \
> + tar \
> + texinfo \
> + usbredir-devel \
> + virglrenderer-devel \
> + vte291-devel \
> + wget \
> + which \
> + xen-devel \
> + xfsprogs-devel \
> + zlib-devel
> +ENV QEMU_CONFIGURE_OPTS --python=/usr/bin/python3
> +
> +RUN dnf install -y $PACKAGES
> +RUN rpm -q $PACKAGES | sort > /packages.txt
> +ENV PATH $PATH:/usr/libexec/python3-sphinx/
> +ENV COVERITY_TOOL_BASE=/coverity-tools
> +COPY run-coverity-scan run-coverity-scan
> +RUN --mount=type=secret,id=coverity.token,required ./run-coverity-scan --update-tools-only --tokenfile /run/secrets/coverity.token
> diff --git a/scripts/coverity-scan/run-coverity-scan b/scripts/coverity-scan/run-coverity-scan
> index d40b51969fa..2e067ef5cfc 100755
> --- a/scripts/coverity-scan/run-coverity-scan
> +++ b/scripts/coverity-scan/run-coverity-scan
> @@ -29,6 +29,7 @@
>
> # Command line options:
> # --dry-run : run the tools, but don't actually do the upload
> +# --docker : create and work inside a docker container
> # --update-tools-only : update the cached copy of the tools, but don't run them
> # --tokenfile : file to read Coverity token from
> # --version ver : specify version being analyzed (default: ask git)
> @@ -36,6 +37,8 @@
> # --srcdir : QEMU source tree to analyze (default: current working dir)
> # --results-tarball : path to copy the results tarball to (default: don't
> # copy it anywhere, just upload it)
> +# --src-tarball : tarball to untar into src dir (default: none); this
> +# is intended mainly for internal use by the Docker support
> #
> # User-specifiable environment variables:
> # COVERITY_TOKEN -- Coverity token
> @@ -125,6 +128,7 @@ update_coverity_tools () {
> # Check user-provided environment variables and arguments
> DRYRUN=no
> UPDATE_ONLY=no
> +DOCKER=no
>
> while [ "$#" -ge 1 ]; do
> case "$1" in
> @@ -181,6 +185,19 @@ while [ "$#" -ge 1 ]; do
> RESULTSTARBALL="$1"
> shift
> ;;
> + --src-tarball)
> + shift
> + if [ $# -eq 0 ]; then
> + echo "--src-tarball needs an argument"
> + exit 1
> + fi
> + SRCTARBALL="$1"
> + shift
> + ;;
> + --docker)
> + DOCKER=yes
> + shift
> + ;;
> *)
> echo "Unexpected argument '$1'"
> exit 1
> @@ -212,6 +229,10 @@ PROJTOKEN="$COVERITY_TOKEN"
> PROJNAME=QEMU
> TARBALL=cov-int.tar.xz
>
> +if [ "$UPDATE_ONLY" = yes ] && [ "$DOCKER" = yes ]; then
> + echo "Combining --docker and --update-tools-only is not supported"
> + exit 1
> +fi
>
> if [ "$UPDATE_ONLY" = yes ]; then
> # Just do the tools update; we don't need to check whether
> @@ -221,8 +242,17 @@ if [ "$UPDATE_ONLY" = yes ]; then
> exit 0
> fi
>
> +if [ ! -e "$SRCDIR" ]; then
> + mkdir "$SRCDIR"
> +fi
> +
> cd "$SRCDIR"
>
> +if [ ! -z "$SRCTARBALL" ]; then
> + echo "Untarring source tarball into $SRCDIR..."
> + tar xvf "$SRCTARBALL"
> +fi
> +
> echo "Checking this is a QEMU source tree..."
> if ! [ -e "$SRCDIR/VERSION" ]; then
> echo "Not in a QEMU source tree?"
> @@ -242,6 +272,66 @@ if [ -z "$COVERITY_EMAIL" ]; then
> COVERITY_EMAIL="$(git config user.email)"
> fi
>
> +# Run ourselves inside docker if that's what the user wants
> +if [ "$DOCKER" = yes ]; then
> + # build docker container including the coverity-scan tools
> + # Put the Coverity token into a temporary file that only
> + # we have read access to, and then pass it to docker build
> + # using --secret. This requires at least Docker 18.09.
> + # Mostly what we are trying to do here is ensure we don't leak
> + # the token into the Docker image.
> + umask 077
> + SECRETDIR=$(mktemp -d)
> + if [ -z "$SECRETDIR" ]; then
> + echo "Failed to create temporary directory"
> + exit 1
> + fi
> + trap 'rm -rf "$SECRETDIR"' INT TERM EXIT
> + echo "Created temporary directory $SECRETDIR"
> + SECRET="$SECRETDIR/token"
> + echo "$COVERITY_TOKEN" > "$SECRET"
> + echo "Building docker container..."
> + # TODO: This re-downloads the tools every time, rather than
> + # caching and reusing the image produced with the downloaded tools.
> + # Not sure why.
I remember hitting something similar when using -f together with COPY.
My guess is that when you build with '-f somefile' instead of pointing at
a directory, and then COPY a file from outside that file's directory, the
cache is invalidated (or not used). If the copied file and the Dockerfile
live in the same directory, caching works (for me).
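Roughly the layout I had in mind (the directory and file names below are illustrative, not the ones from this patch):

```shell
# Hypothetical layout where caching worked for me: the Dockerfile and
# the file it COPYs sit in the same directory, and that directory is
# also the build context.
#
#   scanner/
#   +-- Dockerfile        (contains: COPY run-scan.sh run-scan.sh)
#   +-- run-scan.sh
#
# Building with the directory itself as the context let BuildKit reuse
# the COPY layer across rebuilds when the inputs were unchanged:
DOCKER_BUILDKIT=1 docker build -t scanner-test scanner/

# The case where I saw the cache being skipped was the '-f somefile'
# form with the COPY source resolved relative to a different context
# directory -- which is close to what this patch does.
```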
> + # TODO: how do you get 'docker build' to print the output of the
> + # commands it is running to its stdout? This would be useful for debug.
Maybe '--progress plain'?
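Something like this, untested against this exact patch:

```shell
# With BuildKit the default "tty" progress UI collapses each RUN step's
# output; --progress=plain streams the commands' stdout/stderr instead,
# which should give you the debug output you want.
DOCKER_BUILDKIT=1 docker build --progress=plain -t coverity-scanner \
    --secret id=coverity.token,src="$SECRET" \
    -f scripts/coverity-scan/coverity-scan.docker \
    scripts/coverity-scan
```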
> + DOCKER_BUILDKIT=1 docker build -t coverity-scanner \
> + --secret id=coverity.token,src="$SECRET" \
> + -f scripts/coverity-scan/coverity-scan.docker \
> + scripts/coverity-scan
> + echo "Archiving sources to be analyzed..."
> + ./scripts/archive-source.sh "$SECRETDIR/qemu-sources.tgz"
> + if [ "$DRYRUN" = yes ]; then
> + DRYRUNARG=--dry-run
> + fi
> + echo "Running scanner..."
> + # If we need to capture the output tarball, get the inner run to
> + # save it to the secrets directory so we can copy it out before the
> + # directory is cleaned up.
> + if [ ! -z "$RESULTSTARBALL" ]; then
> + RTARGS="--results-tarball /work/cov-int.tar.xz"
> + else
> + RTARGS=""
> + fi
> + # Arrange for this docker run to get access to the sources with -v.
> + # We pass through all the configuration from the outer script to the inner.
> + export COVERITY_EMAIL COVERITY_BUILD_CMD
> + docker run -it --env COVERITY_EMAIL --env COVERITY_BUILD_CMD \
> + -v "$SECRETDIR:/work" coverity-scanner \
> + ./run-coverity-scan --version "$VERSION" \
> + --description "$DESCRIPTION" $DRYRUNARG --tokenfile /work/token \
> + --srcdir /qemu --src-tarball /work/qemu-sources.tgz $RTARGS
> + if [ ! -z "$RESULTSTARBALL" ]; then
> + echo "Copying results tarball to $RESULTSTARBALL..."
> + cp "$SECRETDIR/cov-int.tar.xz" "$RESULTSTARBALL"
> + fi
> + echo "Docker work complete."
> + exit 0
> +fi
> +
> +# Otherwise, continue with the full build and upload process.
> +
> check_upload_permissions
>
> update_coverity_tools
>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>