public inbox for buildroot@busybox.net
* [Buildroot] [PATCH 1/1] support/testing: add aichat runtime test
@ 2026-01-06 23:11 Julien Olivain via buildroot
  2026-01-07  8:50 ` Alexander Shirokov
  2026-01-07 14:07 ` Thomas Petazzoni via buildroot
  0 siblings, 2 replies; 5+ messages in thread
From: Julien Olivain via buildroot @ 2026-01-06 23:11 UTC (permalink / raw)
  To: buildroot; +Cc: Julien Olivain, Alexander Shirokov

Cc: Alexander Shirokov <shirokovalexs@gmail.com>
Signed-off-by: Julien Olivain <ju.o@free.fr>
---
Patch tested in:
https://gitlab.com/jolivain/buildroot/-/jobs/12624150651
---
 DEVELOPERS                                    |   2 +
 support/testing/tests/package/test_aichat.py  | 108 ++++++++++++++++++
 .../root/.config/aichat/config.yaml           |  11 ++
 3 files changed, 121 insertions(+)
 create mode 100644 support/testing/tests/package/test_aichat.py
 create mode 100644 support/testing/tests/package/test_aichat/rootfs-overlay/root/.config/aichat/config.yaml

diff --git a/DEVELOPERS b/DEVELOPERS
index f982e3123a..0db91f6f40 100644
--- a/DEVELOPERS
+++ b/DEVELOPERS
@@ -1870,6 +1870,8 @@ F:	support/testing/tests/package/test_4th.py
 F:	support/testing/tests/package/test_acl.py
 F:	support/testing/tests/package/test_acpica.py
 F:	support/testing/tests/package/test_acpica/
+F:	support/testing/tests/package/test_aichat.py
+F:	support/testing/tests/package/test_aichat/
 F:	support/testing/tests/package/test_apache.py
 F:	support/testing/tests/package/test_attr.py
 F:	support/testing/tests/package/test_audio_codec_base.py
diff --git a/support/testing/tests/package/test_aichat.py b/support/testing/tests/package/test_aichat.py
new file mode 100644
index 0000000000..7c978315c9
--- /dev/null
+++ b/support/testing/tests/package/test_aichat.py
@@ -0,0 +1,108 @@
+import json
+import os
+import time
+
+import infra.basetest
+
+
+class TestAiChat(infra.basetest.BRTest):
+    rootfs_overlay = \
+        infra.filepath("tests/package/test_aichat/rootfs-overlay")
+    config = f"""
+        BR2_aarch64=y
+        BR2_TOOLCHAIN_EXTERNAL=y
+        BR2_TOOLCHAIN_EXTERNAL_BOOTLIN=y
+        BR2_SYSTEM_DHCP="eth0"
+        BR2_LINUX_KERNEL=y
+        BR2_LINUX_KERNEL_CUSTOM_VERSION=y
+        BR2_LINUX_KERNEL_CUSTOM_VERSION_VALUE="6.18.3"
+        BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG=y
+        BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/qemu/aarch64-virt/linux.config"
+        BR2_PACKAGE_AICHAT=y
+        BR2_PACKAGE_CA_CERTIFICATES=y
+        BR2_PACKAGE_LIBCURL=y
+        BR2_PACKAGE_LIBCURL_CURL=y
+        BR2_PACKAGE_LLAMA_CPP=y
+        BR2_PACKAGE_LLAMA_CPP_SERVER=y
+        BR2_PACKAGE_LLAMA_CPP_TOOLS=y
+        BR2_PACKAGE_OPENSSL=y
+        BR2_ROOTFS_OVERLAY="{rootfs_overlay}"
+        BR2_TARGET_ROOTFS_EXT2=y
+        BR2_TARGET_ROOTFS_EXT2_SIZE="1024M"
+        # BR2_TARGET_ROOTFS_TAR is not set
+    """
+
+    def login(self):
+        img = os.path.join(self.builddir, "images", "rootfs.ext2")
+        kern = os.path.join(self.builddir, "images", "Image")
+        self.emulator.boot(
+            arch="aarch64",
+            kernel=kern,
+            kernel_cmdline=["root=/dev/vda"],
+            options=[
+                "-M", "virt",
+                "-cpu", "cortex-a57",
+                "-smp", "4",
+                "-m", "2G",
+                "-drive", f"file={img},if=virtio,format=raw",
+                "-net", "nic,model=virtio",
+                "-net", "user"
+            ]
+        )
+        self.emulator.login()
+
+    def test_run(self):
+        self.login()
+
+        # Check the program can execute.
+        self.assertRunOk("aichat --version")
+
+        # We define a Hugging Face model to be downloaded. We choose
+        # a relatively small model to keep the test fast.
+        hf_model = "ggml-org/gemma-3-270m-it-GGUF"
+
+        # We define a common-knowledge question to ask the model.
+        prompt = "What is the capital of the United Kingdom?"
+
+        # We define an expected keyword to be present in the answer.
+        expected_answer = "london"
+
+        # We set a few llama-server options.
+        llama_opts = "--log-file /tmp/llama-server.log"
+        # We set a fixed seed to reduce test variability.
+        llama_opts += " --seed 123456789"
+        llama_opts += f" --hf-repo {hf_model}"
+
+        # We start a llama-server in the background, which will
+        # expose an OpenAI-compatible API to be used by aichat.
+        cmd = f"( llama-server {llama_opts} &>/dev/null & )"
+        self.assertRunOk(cmd)
+
+        # We wait for the llama-server to be ready. We query the
+        # available models API to check the server is ready. We expect
+        # to see our model. We also add an extra "echo" so the JSON
+        # output ends with a newline.
+        cmd = "curl http://127.0.0.1:8080/v1/models && echo"
+        for attempt in range(20 * self.timeout_multiplier):
+            time.sleep(5)
+            # To debug the llama-server startup, uncomment the
+            # following line:
+            # self.assertRunOk("cat /tmp/llama-server.log")
+            out, ret = self.emulator.run(cmd)
+            if ret == 0:
+                models_json = "".join(out)
+                models = json.loads(models_json)
+                model_name = models['models'][0]['name']
+                if model_name == hf_model:
+                    break
+        else:
+            self.fail("Timeout while waiting for llama-server.")
+
+        # We ask our question and check the expected answer is present
+        # in the output. We pipe the output through "cat" to suppress
+        # the aichat UTF-8 spinner (aichat stdout will not be a tty).
+        cmd = f"aichat '{prompt}' | cat"
+        out, ret = self.emulator.run(cmd, timeout=120)
+        self.assertEqual(ret, 0)
+        out_str = "\n".join(out).lower()
+        self.assertIn(expected_answer, out_str)
diff --git a/support/testing/tests/package/test_aichat/rootfs-overlay/root/.config/aichat/config.yaml b/support/testing/tests/package/test_aichat/rootfs-overlay/root/.config/aichat/config.yaml
new file mode 100644
index 0000000000..f6a054696e
--- /dev/null
+++ b/support/testing/tests/package/test_aichat/rootfs-overlay/root/.config/aichat/config.yaml
@@ -0,0 +1,11 @@
+# see https://github.com/sigoden/aichat/blob/main/config.example.yaml
+
+model: llama-server:ggml-org/gemma-3-270m-it-GGUF
+stream: false
+highlight: false
+clients:
+- type: openai-compatible
+  name: llama-server
+  api_base: http://127.0.0.1:8080/v1
+  models:
+  - name: ggml-org/gemma-3-270m-it-GGUF
-- 
2.52.0
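For reference, the readiness loop in the test above parses the /v1/models response and compares the first model name against the Hugging Face repo. A minimal, self-contained sketch of that parsing step is shown below; the sample JSON string is hypothetical, mirroring only the field names the test expects (note the test assumes a top-level "models" array with "name" fields, rather than the OpenAI-spec "data"/"id" layout):

```python
import json

# Hypothetical sample of the response shape the test expects from
# llama-server's /v1/models endpoint (field names taken from the
# test code, not from a server specification).
sample = '{"models": [{"name": "ggml-org/gemma-3-270m-it-GGUF"}]}'


def first_model_name(models_json: str) -> str:
    """Extract the first model name, as the test's wait loop does."""
    models = json.loads(models_json)
    return models["models"][0]["name"]


print(first_model_name(sample))
```

If the server returns a different shape, `json.loads` still succeeds but the `["models"][0]["name"]` lookup raises `KeyError`/`IndexError`, which the test's retry loop would surface as a timeout failure.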
