From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 21 Oct 2025 13:40:52 +0200
From: Bastien Curutchet
Subject: Re: [PATCH bpf-next v5 00/15] selftests/bpf: Integrate test_xsk.c to
 test_progs framework
To: Maciej Fijalkowski, Alexei Starovoitov
Cc: Björn Töpel, Magnus Karlsson, Jonathan Lemon, Alexei Starovoitov,
 Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman,
 Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
 Hao Luo, Jiri Olsa, Mykola Lysenko, Shuah Khan, "David S. Miller",
 Jakub Kicinski, Jesper Dangaard Brouer, Thomas Petazzoni,
 Alexis Lothore, Network Development, bpf,
 "open list:KERNEL SELFTEST FRAMEWORK", LKML
References: <20251016-xsk-v5-0-662c95eb8005@bootlin.com>
X-Mailing-List: netdev@vger.kernel.org

Hi

On 10/18/25 1:09 PM, Maciej Fijalkowski wrote:
> On Fri, Oct 17, 2025 at 11:27:26AM -0700, Alexei Starovoitov wrote:
>> On Thu, Oct 16, 2025 at 12:45 AM Bastien Curutchet (eBPF Foundation)
>> wrote:
>>>
>>> Hi all,
>>>
>>> Now that the merge window is over, here's a respin of the previous
>>> iteration rebased on the latest bpf-next_base. The bug triggering the
>>> XDP_ADJUST_TAIL_SHRINK_MULTI_BUFF failure when CONFIG_DEBUG_VM is
>>> enabled hasn't been fixed yet, so I've moved the test to the flaky
>>> table.
>>>
>>> The test_xsk.sh script covers many AF_XDP use cases. The tests it runs
>>> are defined in xskxceiver.c. Since this script is used to test real
>>> hardware, the goal here is to leave it as it is and only integrate the
>>> tests that run on veth peers into the test_progs framework.
>>>
>>> Some tests are flaky, so they can't be integrated into the CI as they
>>> are. I think that fixing their flakiness would require a significant
>>> amount of work. So, as a first step, I've excluded them from the list
>>> of tests migrated to the CI (cf. PATCH 14). If these tests get fixed
>>> at some point, integrating them into the CI will be straightforward.
>>>
>>> PATCH 1 extracts test_xsk[.c/.h] from xskxceiver[.c/.h] to make the
>>> tests available to test_progs.
>>> PATCHES 2 to 7 fix small issues in the current tests.
>>> PATCHES 8 to 13 handle all errors to release resources instead of
>>> calling exit() when any error occurs.
>>> PATCH 14 isolates some flaky tests.
>>> PATCH 15 integrates the non-flaky tests into the test_progs framework.
>>
>> Looks good, but why does it take so long to run?
>>
>> time ./test_progs -t xsk
>> Summary: 2/66 PASSED, 0 SKIPPED, 0 FAILED
>>
>> real	0m29.031s
>> user	0m4.414s
>> sys	0m20.893s
>>
>> That's a big addition to overall test_progs time.
>> Could you reduce it to a couple seconds?
>
> It's because a veth pair is set up for each test case, from what I
> recall when I pointed this out during review. It does not scale. It
> would be better to have the veth pair created once for the whole test
> suite. HTH.
>

The initial test_xsk.sh was already quite long; the test migration
hasn't affected its execution time on my side.

I've tried setting up the veth peers once for all the subtests, as
suggested by Maciej. This results in about a 35% speed gain on my
setup, but unfortunately it's still not enough to bring it down to a
couple of seconds. I'll investigate further.

Best regards,
Bastien
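P.S. For illustration only, here is a rough sketch of what a once-per-suite
veth setup could look like from a shell. The interface names and the
DRY_RUN switch are placeholders invented for this sketch, not what the
selftests actually use:

```shell
#!/bin/sh
# Rough sketch: create the veth pair once for the whole suite instead
# of once per test case. Interface names are placeholders, and the
# DRY_RUN switch is only here so the commands can be previewed without
# root privileges.

run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

suite_setup() {
    run ip link add veth0 type veth peer name veth1
    run ip link set veth0 up
    run ip link set veth1 up
}

suite_teardown() {
    # Deleting one end of a veth pair removes its peer as well.
    run ip link del veth0
}

# Preview the commands; drop DRY_RUN=1 to actually run them (as root).
DRY_RUN=1 suite_setup
# ... every xsk subtest would reuse the same veth0/veth1 pair here ...
DRY_RUN=1 suite_teardown
```

The point of the sketch is just that setup and teardown bracket the whole
run, so each subtest only pays for its own traffic, not for link creation.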