From: Toke Høiland-Jørgensen
To: Andrii Nakryiko
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
 Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
 KP Singh, Kumar Kartikeya Dwivedi, Zhiqian Guan, Networking, bpf
Subject: Re: [PATCH bpf-next v2] libbpf: Use dynamically allocated buffer when receiving netlink messages
References: <20220211234819.612288-1-toke@redhat.com> <87h7927q3o.fsf@toke.dk>
X-Clacks-Overhead: GNU Terry Pratchett
Date: Mon, 14 Feb 2022 17:52:25 +0100
Message-ID: <87a6et75me.fsf@toke.dk>
X-Mailing-List: netdev@vger.kernel.org

Andrii Nakryiko writes:

> On Sun, Feb 13, 2022 at 7:17 AM Toke Høiland-Jørgensen wrote:
>>
>> Andrii Nakryiko writes:
>>
>> > On Fri, Feb 11, 2022 at 3:49 PM Toke Høiland-Jørgensen wrote:
>> >>
>> >> When receiving netlink messages, libbpf was using a statically allocated
>> >> stack buffer of 4k bytes. This happened to work fine on systems with a 4k
>> >> page size, but on systems with larger page sizes it can lead to truncated
>> >> messages. The user-visible impact of this was that libbpf would insist no
>> >> XDP program was attached to some interfaces because that bit of the netlink
>> >> message got chopped off.
>> >>
>> >> Fix this by switching to a dynamically allocated buffer; we borrow the
>> >> approach from iproute2 of using recvmsg() with MSG_PEEK|MSG_TRUNC to get
>> >> the actual size of the pending message before receiving it, adjusting the
>> >> buffer as necessary. While we're at it, also add retries on interrupted
>> >> system calls around the recvmsg() call.
>> >>
>> >> v2:
>> >>   - Move peek logic to libbpf_netlink_recv(), don't double free on ENOMEM.
>> >>
>> >> Reported-by: Zhiqian Guan
>> >> Fixes: 8bbb77b7c7a2 ("libbpf: Add various netlink helpers")
>> >> Acked-by: Kumar Kartikeya Dwivedi
>> >> Signed-off-by: Toke Høiland-Jørgensen
>> >> ---
>> >
>> > Applied to bpf-next.
>>
>> Awesome, thanks!
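
For reference, the peek-then-allocate pattern the changelog describes
looks roughly like the sketch below. It is a minimal illustration, not
the exact code that was merged; the function name, the caller-provided
initial buffer, and the error-handling details are assumptions.

#include <errno.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Receive one netlink message into *bufp (a heap buffer of *buf_len
 * bytes), growing the buffer first if the kernel has a larger message
 * pending. Returns the message length, or -errno on failure. */
static ssize_t netlink_recv_dynamic(int sock, char **bufp, size_t *buf_len)
{
        struct iovec iov = { .iov_base = *bufp, .iov_len = *buf_len };
        struct msghdr mhdr = { .msg_iov = &iov, .msg_iovlen = 1 };
        ssize_t len;

        /* Peek with MSG_TRUNC: the return value is the real size of the
         * pending message even if it exceeds iov_len, and MSG_PEEK leaves
         * the message in the queue. Retry interrupted system calls. */
        do {
                len = recvmsg(sock, &mhdr, MSG_PEEK | MSG_TRUNC);
        } while (len < 0 && (errno == EINTR || errno == EAGAIN));
        if (len < 0)
                return -errno;

        /* Grow the buffer if the message would otherwise be truncated. */
        if ((size_t)len > *buf_len) {
                char *new_buf = realloc(*bufp, len);

                if (!new_buf)
                        return -ENOMEM;
                *bufp = new_buf;
                *buf_len = len;
                iov.iov_base = new_buf;
                iov.iov_len = len;
        }

        /* Now receive the message for real, again retrying on EINTR. */
        do {
                len = recvmsg(sock, &mhdr, 0);
        } while (len < 0 && (errno == EINTR || errno == EAGAIN));

        return len < 0 ? -errno : len;
}

The key property is that the peek never consumes the message, so the
second recvmsg() is guaranteed to see the same message it was sized for.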

>> > One improvement would be to avoid initial malloc of 4096, especially
>> > if that size is enough for most cases. You could detect this through
>> > iov.iov_base == buf and not free(iov.iov_base) at the end. Seems
>> > reliable and simple enough. I'll leave it up to you to follow up, if
>> > you think it's a good idea.
>>
>> Hmm, seems distributions tend to default the stack size limit to 8k; so
>> not sure if blowing half of that on a buffer just to avoid a call to
>> malloc() in a non-performance-sensitive path is ideal to begin with? I
>> think I'd prefer to just keep the dynamic allocation...
>
> 8KB for user-space thread stack, really? Not 2MB by default? Are you
> sure you are not confusing this with kernel threads?

Ha, oops! I was looking in the right place, just got the units wrong;
those were kbytes, not bytes, so an 8M stack size. Sorry for the
confusion :)

-Toke
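
P.S. For completeness, the stack-first variant suggested above would
look something like this sketch; it is illustrative only, and the
4096-byte default plus the process_msg() callback are made up for the
example:

#include <errno.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Start with an on-stack buffer; fall back to the heap only for
 * oversized messages, and use a pointer comparison to decide whether
 * anything needs freeing afterwards. */
static int netlink_recv_stack_first(int sock,
                                    int (*process_msg)(void *buf, size_t len))
{
        char stack_buf[4096];
        struct iovec iov = { .iov_base = stack_buf, .iov_len = sizeof(stack_buf) };
        struct msghdr mhdr = { .msg_iov = &iov, .msg_iovlen = 1 };
        ssize_t len;
        int ret;

        /* Peek at the real message size first, as in the merged patch. */
        do {
                len = recvmsg(sock, &mhdr, MSG_PEEK | MSG_TRUNC);
        } while (len < 0 && errno == EINTR);
        if (len < 0)
                return -errno;

        if ((size_t)len > iov.iov_len) {
                iov.iov_base = malloc(len); /* heap only when necessary */
                if (!iov.iov_base)
                        return -ENOMEM;
                iov.iov_len = len;
        }

        do {
                len = recvmsg(sock, &mhdr, 0);
        } while (len < 0 && errno == EINTR);
        ret = len < 0 ? -errno : process_msg(iov.iov_base, len);

        /* This is the "iov.iov_base == buf" test: free only if we
         * actually fell back to malloc(). */
        if (iov.iov_base != stack_buf)
                free(iov.iov_base);
        return ret;
}

The trade-off is exactly the one debated above: the common case never
touches the heap, at the cost of 4k of stack in every call chain that
does a netlink receive.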