From: Julien Olivain via buildroot <buildroot@buildroot.org>
To: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Julien Olivain via buildroot <buildroot@buildroot.org>,
Alexander Shirokov <shirokovalexs@gmail.com>
Subject: Re: [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test
Date: Wed, 07 Jan 2026 19:20:40 +0100 [thread overview]
Message-ID: <e3de047e16c5cef704e38aaf77c38ef2@free.fr> (raw)
In-Reply-To: <20260107150725.10c0fba7@windsurf>
Hi Thomas,
On 07/01/2026 15:07, Thomas Petazzoni wrote:
> Hello Julien,
>
> Thanks for the patch! Obviously looks good, just one question.
>
> On Wed, 7 Jan 2026 00:11:10 +0100
> Julien Olivain via buildroot <buildroot@buildroot.org> wrote:
>
>> + def test_run(self):
>> + self.login()
>> +
>> + # Check the program can execute.
>> + self.assertRunOk("aichat --version")
>> +
>> + # We define a Hugging Face model to be downloaded.
>> + # We choose a relatively small model, for testing.
>> + hf_model = "ggml-org/gemma-3-270m-it-GGUF"
>
> Does this mean that the model will be downloaded by llama-server when
> we run it? I don't think we expect tests to require a network
> connection to the Internet, and download "random stuff".
>
> Or did I misunderstand your comment?
Your understanding is correct: llama-server can download the model
from the Internet when it is not available locally. That is what is
happening here.
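As an illustration, the download-if-missing behaviour can be triggered
with llama-server's Hugging Face option. This is only a sketch: the
exact flag names and cache location may differ between llama.cpp
versions, and the invocation itself is shown commented out since it
requires network access.

```shell
# Hypothetical invocation; assumes Internet access and a llama.cpp
# build with download support (flag names may vary across versions).
MODEL="ggml-org/gemma-3-270m-it-GGUF"

#   llama-server -hf "${MODEL}" --port 8080
#
# On first run the model is fetched from Hugging Face and cached
# locally; later runs reuse the cached copy.
echo "model source: https://huggingface.co/${MODEL}"
```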
I am not sure there was a strict rule forbidding public network access
in runtime tests.
I think Yann started adding runtime tests with such a public network
requirement a while back; see [1] [2] [3] [4]. I am just following in
his footsteps ;) This is also what motivated commit [5], for example.
I think the rationale was that the test runner needs Internet
connectivity anyway to download source packages. Most runtime tests do
not require a network connection, so I thought this was more a matter
of fact than a strict rule. Some tests (like podman, etc.) get better
coverage when they can work with their online repositories.
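If a stricter network policy were ever adopted, such tests could probe
connectivity first and skip themselves otherwise. A minimal sketch of
such a probe (the host name and the skip mechanism are illustrative,
not part of the actual Buildroot test infrastructure):

```python
import socket

def network_available(host="huggingface.co", port=443, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds.

    A failed connection (or failed name resolution) is treated as
    "no network"; a runtime test could skip itself in that case
    instead of failing on the download.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A test would then call something like `self.skipTest("no network")`
when this returns False, rather than failing mid-download.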
Regarding the data and its source, I would not exactly call that
"random stuff". Hugging Face [6] is the reference repository used
by llama.cpp. Also, this model [7] in particular is a smaller
version of the one used in the llama.cpp example in [8]. Note that
ggml-org [9] is the organization that maintains llama.cpp.
Do you want me to send a v2 adding this justification?
Or do you prefer to (re?)open the topic, to strictly forbid network
access in runtime tests?
> Thomas
> --
> Thomas Petazzoni, co-owner and CEO, Bootlin
> Embedded Linux and Kernel engineering and training
> https://bootlin.com
Best regards,
Julien.
[1]
https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_distribution_registry.py#L80-85
[2]
https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_docker_compose.py#L42-43
[3]
https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_podman.py#L97-98
[4]
https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_skopeo.py#L50-54
[5]
https://gitlab.com/buildroot.org/buildroot/-/commit/cf8641b73e7f1577637bfef0ece78dd519b25d19
[6] https://huggingface.co/
[7] https://huggingface.co/ggml-org/gemma-3-270m-it-GGUF
[8]
https://github.com/ggml-org/llama.cpp/blob/b7271/README.md?plain=1#L53
[9] https://github.com/ggml-org
_______________________________________________
buildroot mailing list
buildroot@buildroot.org
https://lists.buildroot.org/mailman/listinfo/buildroot
Thread overview: 5+ messages
2026-01-06 23:11 [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test Julien Olivain via buildroot
2026-01-07 8:50 ` Alexander Shirokov
2026-01-07 14:07 ` Thomas Petazzoni via buildroot
2026-01-07 18:20 ` Julien Olivain via buildroot [this message]
2026-02-03 8:29 ` Julien Olivain via buildroot