* [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test
@ 2026-01-06 23:11 Julien Olivain via buildroot
2026-01-07 8:50 ` Alexander Shirokov
2026-01-07 14:07 ` Thomas Petazzoni via buildroot
0 siblings, 2 replies; 5+ messages in thread
From: Julien Olivain via buildroot @ 2026-01-06 23:11 UTC (permalink / raw)
To: buildroot; +Cc: Julien Olivain, Alexander Shirokov
Cc: Alexander Shirokov <shirokovalexs@gmail.com>
Signed-off-by: Julien Olivain <ju.o@free.fr>
---
Patch tested in:
https://gitlab.com/jolivain/buildroot/-/jobs/12624150651
---
DEVELOPERS | 2 +
support/testing/tests/package/test_aichat.py | 108 ++++++++++++++++++
.../root/.config/aichat/config.yaml | 11 ++
3 files changed, 121 insertions(+)
create mode 100644 support/testing/tests/package/test_aichat.py
create mode 100644 support/testing/tests/package/test_aichat/rootfs-overlay/root/.config/aichat/config.yaml
diff --git a/DEVELOPERS b/DEVELOPERS
index f982e3123a..0db91f6f40 100644
--- a/DEVELOPERS
+++ b/DEVELOPERS
@@ -1870,6 +1870,8 @@ F: support/testing/tests/package/test_4th.py
F: support/testing/tests/package/test_acl.py
F: support/testing/tests/package/test_acpica.py
F: support/testing/tests/package/test_acpica/
+F: support/testing/tests/package/test_aichat.py
+F: support/testing/tests/package/test_aichat/
F: support/testing/tests/package/test_apache.py
F: support/testing/tests/package/test_attr.py
F: support/testing/tests/package/test_audio_codec_base.py
diff --git a/support/testing/tests/package/test_aichat.py b/support/testing/tests/package/test_aichat.py
new file mode 100644
index 0000000000..7c978315c9
--- /dev/null
+++ b/support/testing/tests/package/test_aichat.py
@@ -0,0 +1,108 @@
+import json
+import os
+import time
+
+import infra.basetest
+
+
+class TestAiChat(infra.basetest.BRTest):
+    rootfs_overlay = \
+        infra.filepath("tests/package/test_aichat/rootfs-overlay")
+    config = f"""
+        BR2_aarch64=y
+        BR2_TOOLCHAIN_EXTERNAL=y
+        BR2_TOOLCHAIN_EXTERNAL_BOOTLIN=y
+        BR2_SYSTEM_DHCP="eth0"
+        BR2_LINUX_KERNEL=y
+        BR2_LINUX_KERNEL_CUSTOM_VERSION=y
+        BR2_LINUX_KERNEL_CUSTOM_VERSION_VALUE="6.18.3"
+        BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG=y
+        BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/qemu/aarch64-virt/linux.config"
+        BR2_PACKAGE_AICHAT=y
+        BR2_PACKAGE_CA_CERTIFICATES=y
+        BR2_PACKAGE_LIBCURL=y
+        BR2_PACKAGE_LIBCURL_CURL=y
+        BR2_PACKAGE_LLAMA_CPP=y
+        BR2_PACKAGE_LLAMA_CPP_SERVER=y
+        BR2_PACKAGE_LLAMA_CPP_TOOLS=y
+        BR2_PACKAGE_OPENSSL=y
+        BR2_ROOTFS_OVERLAY="{rootfs_overlay}"
+        BR2_TARGET_ROOTFS_EXT2=y
+        BR2_TARGET_ROOTFS_EXT2_SIZE="1024M"
+        # BR2_TARGET_ROOTFS_TAR is not set
+        """
+
+    def login(self):
+        img = os.path.join(self.builddir, "images", "rootfs.ext2")
+        kern = os.path.join(self.builddir, "images", "Image")
+        self.emulator.boot(
+            arch="aarch64",
+            kernel=kern,
+            kernel_cmdline=["root=/dev/vda"],
+            options=[
+                "-M", "virt",
+                "-cpu", "cortex-a57",
+                "-smp", "4",
+                "-m", "2G",
+                "-drive", f"file={img},if=virtio,format=raw",
+                "-net", "nic,model=virtio",
+                "-net", "user"
+            ]
+        )
+        self.emulator.login()
+
+    def test_run(self):
+        self.login()
+
+        # Check the program can execute.
+        self.assertRunOk("aichat --version")
+
+        # We define a Hugging Face model to be downloaded.
+        # We choose a relatively small model, for testing.
+        hf_model = "ggml-org/gemma-3-270m-it-GGUF"
+
+        # We define a common knowledge question to ask the model.
+        prompt = "What is the capital of the United Kingdom?"
+
+        # We define an expected keyword, to be present in the answer.
+        expected_answer = "london"
+
+        # We set a few llama-server options:
+        llama_opts = "--log-file /tmp/llama-server.log"
+        # We set a fixed seed, to reduce the variability of the test.
+        llama_opts += " --seed 123456789"
+        llama_opts += f" --hf-repo {hf_model}"
+
+        # We start a llama-server in the background, which will expose
+        # an OpenAI-compatible API to be used by aichat.
+        cmd = f"( llama-server {llama_opts} &>/dev/null & )"
+        self.assertRunOk(cmd)
+
+        # We wait for the llama-server to be ready. We query the
+        # available models API to check the server is ready. We expect
+        # to see our model. We also add an extra "echo" to add an
+        # extra newline.
+        cmd = "curl http://127.0.0.1:8080/v1/models && echo"
+        for attempt in range(20 * self.timeout_multiplier):
+            time.sleep(5)
+            # To debug the llama-server startup, uncomment the
+            # following line:
+            # self.assertRunOk("cat /tmp/llama-server.log")
+            out, ret = self.emulator.run(cmd)
+            if ret == 0:
+                models_json = "".join(out)
+                models = json.loads(models_json)
+                model_name = models['models'][0]['name']
+                if model_name == hf_model:
+                    break
+        else:
+            self.fail("Timeout while waiting for llama-server.")
+
+        # We ask our question and check the expected answer is present
+        # in the output. We pipe the output through "cat" to suppress
+        # the aichat UTF-8 spinner (aichat stdout will not be a tty).
+        cmd = f"aichat '{prompt}' | cat"
+        out, ret = self.emulator.run(cmd, timeout=120)
+        self.assertEqual(ret, 0)
+        out_str = "\n".join(out).lower()
+        self.assertIn(expected_answer, out_str)
diff --git a/support/testing/tests/package/test_aichat/rootfs-overlay/root/.config/aichat/config.yaml b/support/testing/tests/package/test_aichat/rootfs-overlay/root/.config/aichat/config.yaml
new file mode 100644
index 0000000000..f6a054696e
--- /dev/null
+++ b/support/testing/tests/package/test_aichat/rootfs-overlay/root/.config/aichat/config.yaml
@@ -0,0 +1,11 @@
+# see https://github.com/sigoden/aichat/blob/main/config.example.yaml
+
+model: llama-server:ggml-org/gemma-3-270m-it-GGUF
+stream: false
+highlight: false
+clients:
+- type: openai-compatible
+  name: llama-server
+  api_base: http://127.0.0.1:8080/v1
+  models:
+    - name: ggml-org/gemma-3-270m-it-GGUF
--
2.52.0
_______________________________________________
buildroot mailing list
buildroot@buildroot.org
https://lists.buildroot.org/mailman/listinfo/buildroot
* Re: [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test
2026-01-06 23:11 [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test Julien Olivain via buildroot
@ 2026-01-07 8:50 ` Alexander Shirokov
2026-01-07 14:07 ` Thomas Petazzoni via buildroot
1 sibling, 0 replies; 5+ messages in thread
From: Alexander Shirokov @ 2026-01-07 8:50 UTC (permalink / raw)
To: buildroot; +Cc: ju.o, shirokovalexs
Hello Julien,
Thank you for adding the tests. I reviewed them and ran them on my
side. Everything works fine.
Tested-by: Alexander Shirokov <shirokovalexs@gmail.com>
* Re: [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test
2026-01-06 23:11 [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test Julien Olivain via buildroot
2026-01-07 8:50 ` Alexander Shirokov
@ 2026-01-07 14:07 ` Thomas Petazzoni via buildroot
2026-01-07 18:20 ` Julien Olivain via buildroot
1 sibling, 1 reply; 5+ messages in thread
From: Thomas Petazzoni via buildroot @ 2026-01-07 14:07 UTC (permalink / raw)
To: Julien Olivain via buildroot; +Cc: Julien Olivain, Alexander Shirokov
Hello Julien,
Thanks for the patch! Obviously looks good, just one question.
On Wed, 7 Jan 2026 00:11:10 +0100
Julien Olivain via buildroot <buildroot@buildroot.org> wrote:
> +    def test_run(self):
> +        self.login()
> +
> +        # Check the program can execute.
> +        self.assertRunOk("aichat --version")
> +
> +        # We define a Hugging Face model to be downloaded.
> +        # We choose a relatively small model, for testing.
> +        hf_model = "ggml-org/gemma-3-270m-it-GGUF"
Does this mean that the model will be downloaded by llama-server when
we run it? I don't think we expect tests to require a network
connection to the Internet, and download "random stuff".
Or did I misunderstand your comment?
Thomas
--
Thomas Petazzoni, co-owner and CEO, Bootlin
Embedded Linux and Kernel engineering and training
https://bootlin.com
* Re: [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test
2026-01-07 14:07 ` Thomas Petazzoni via buildroot
@ 2026-01-07 18:20 ` Julien Olivain via buildroot
2026-02-03 8:29 ` Julien Olivain via buildroot
0 siblings, 1 reply; 5+ messages in thread
From: Julien Olivain via buildroot @ 2026-01-07 18:20 UTC (permalink / raw)
To: Thomas Petazzoni; +Cc: Julien Olivain via buildroot, Alexander Shirokov
Hi Thomas,
On 07/01/2026 15:07, Thomas Petazzoni wrote:
> Hello Julien,
>
> Thanks for the patch! Obviously looks good, just one question.
>
> On Wed, 7 Jan 2026 00:11:10 +0100
> Julien Olivain via buildroot <buildroot@buildroot.org> wrote:
>
>> +    def test_run(self):
>> +        self.login()
>> +
>> +        # Check the program can execute.
>> +        self.assertRunOk("aichat --version")
>> +
>> +        # We define a Hugging Face model to be downloaded.
>> +        # We choose a relatively small model, for testing.
>> +        hf_model = "ggml-org/gemma-3-270m-it-GGUF"
>
> Does this mean that the model will be downloaded by llama-server when
> we run it? I don't think we expect tests to require a network
> connection to the Internet, and download "random stuff".
>
> Or did I misunderstand your comment?
Your understanding is correct: llama-server can download the model
from the Internet if it is not available locally. This is what is
happening here.
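For illustration, once the server is up, the readiness check in the test
simply parses the JSON returned by the models endpoint. A minimal
standalone sketch, using a hypothetical sample payload (the exact
llama-server response may contain additional fields):

```python
import json

# Hypothetical sample of the models endpoint response; the real
# llama-server payload may differ in its extra fields.
sample = '{"models": [{"name": "ggml-org/gemma-3-270m-it-GGUF"}]}'

# Same extraction as in the test: take the name of the first model.
models = json.loads(sample)
model_name = models["models"][0]["name"]
print(model_name)
```

The test loops on this check until the name matches the requested
Hugging Face repository, which indicates the download completed.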
I am not sure there is a strict rule forbidding public network
access in runtime tests.
I think Yann started adding runtime tests with such a public network
requirement a while back; see [1] [2] [3] [4]. I'm just following in
his footsteps ;) This is also what motivated commit [5], for example.
I think the argument was that the test runner needs Internet
connectivity anyway, to download source packages. Most runtime tests
do not require a network connection, so I thought it was more a fact
than a strict rule. Some tests (like podman, etc.) have better
coverage when they work with their online repositories.
Regarding the data and its source, I would not exactly call that
"random stuff". Hugging Face [6] is a reference repository used
by llama.cpp. Also, this model [7] in particular is a smaller
version of the llama.cpp example in [8]. Note that ggml-org [9]
is the reference organization for llama.cpp.
Do you want me to send a v2 adding this justification?
Or do you prefer to (re?)open the topic to strictly forbid network
in runtime tests?
> Thomas
> --
> Thomas Petazzoni, co-owner and CEO, Bootlin
> Embedded Linux and Kernel engineering and training
> https://bootlin.com
Best regards,
Julien.
[1]
https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_distribution_registry.py#L80-85
[2]
https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_docker_compose.py#L42-43
[3]
https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_podman.py#L97-98
[4]
https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_skopeo.py#L50-54
[5]
https://gitlab.com/buildroot.org/buildroot/-/commit/cf8641b73e7f1577637bfef0ece78dd519b25d19
[6] https://huggingface.co/
[7] https://huggingface.co/ggml-org/gemma-3-270m-it-GGUF
[8]
https://github.com/ggml-org/llama.cpp/blob/b7271/README.md?plain=1#L53
[9] https://github.com/ggml-org
* Re: [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test
2026-01-07 18:20 ` Julien Olivain via buildroot
@ 2026-02-03 8:29 ` Julien Olivain via buildroot
0 siblings, 0 replies; 5+ messages in thread
From: Julien Olivain via buildroot @ 2026-02-03 8:29 UTC (permalink / raw)
To: Julien Olivain
Cc: Thomas Petazzoni, Julien Olivain via buildroot,
Alexander Shirokov
Hi,
On 07/01/2026 19:20, Julien Olivain via buildroot wrote:
> Hi Thomas,
>
> On 07/01/2026 15:07, Thomas Petazzoni wrote:
>> Hello Julien,
>>
>> Thanks for the patch! Obviously looks good, just one question.
>>
>> On Wed, 7 Jan 2026 00:11:10 +0100
>> Julien Olivain via buildroot <buildroot@buildroot.org> wrote:
>>
>>> +    def test_run(self):
>>> +        self.login()
>>> +
>>> +        # Check the program can execute.
>>> +        self.assertRunOk("aichat --version")
>>> +
>>> +        # We define a Hugging Face model to be downloaded.
>>> +        # We choose a relatively small model, for testing.
>>> +        hf_model = "ggml-org/gemma-3-270m-it-GGUF"
>>
>> Does this mean that the model will be downloaded by llama-server when
>> we run it? I don't think we expect tests to require a network
>> connection to the Internet, and download "random stuff".
>>
>> Or did I misunderstand your comment?
>
> Your understanding is correct: llama-server can download the model
> from the Internet if it is not available locally. This is what is
> happening here.
>
> I am not sure there is a strict rule forbidding public network
> access in runtime tests.
>
> I think Yann started adding runtime tests with such a public network
> requirement a while back; see [1] [2] [3] [4]. I'm just following in
> his footsteps ;) This is also what motivated commit [5], for example.
>
> I think the argument was that the test runner needs Internet
> connectivity anyway, to download source packages. Most runtime tests
> do not require a network connection, so I thought it was more a fact
> than a strict rule. Some tests (like podman, etc.) have better
> coverage when they work with their online repositories.
>
> Regarding the data and its source, I would not exactly call that
> "random stuff". Hugging Face [6] is a reference repository used
> by llama.cpp. Also, this model [7] in particular is a smaller
> version of the llama.cpp example in [8]. Note that ggml-org [9]
> is the reference organization for llama.cpp.
>
> Do you want me to send a v2 adding this justification?
>
> Or do you prefer to (re?)open the topic to strictly forbid network
> in runtime tests?
After discussing this during the Buildroot Dev Days 2026, we decided
that runtime tests should avoid using the network whenever possible.
But in cases where a tool's functionality depends on the network, it
can be used.
For this reason, I applied this patch.
>> Thomas
>> --
>> Thomas Petazzoni, co-owner and CEO, Bootlin
>> Embedded Linux and Kernel engineering and training
>> https://bootlin.com
>
> Best regards,
>
> Julien.
>
> [1]
> https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_distribution_registry.py#L80-85
> [2]
> https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_docker_compose.py#L42-43
> [3]
> https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_podman.py#L97-98
> [4]
> https://gitlab.com/buildroot.org/buildroot/-/blob/2025.11/support/testing/tests/package/test_skopeo.py#L50-54
> [5]
> https://gitlab.com/buildroot.org/buildroot/-/commit/cf8641b73e7f1577637bfef0ece78dd519b25d19
> [6] https://huggingface.co/
> [7] https://huggingface.co/ggml-org/gemma-3-270m-it-GGUF
> [8]
> https://github.com/ggml-org/llama.cpp/blob/b7271/README.md?plain=1#L53
> [9] https://github.com/ggml-org
Best regards,
Julien.