Message-ID: <2b69e397-a457-4dba-86f1-47b7fe87ef79@linux.dev>
Date: Tue, 29 Jul 2025 15:45:00 -0700
Subject: Re: [PATCH v2] bpf: fix stackmap overflow check in __bpf_get_stackid()
From: Yonghong Song
To: Arnaud Lecomte, song@kernel.org, jolsa@kernel.org, ast@kernel.org,
 daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
 eddyz87@gmail.com, john.fastabend@gmail.com, kpsingh@kernel.org,
 sdf@fomichev.me, haoluo@google.com
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org,
 syzkaller-bugs@googlegroups.com,
 syzbot+c9b724fbb41cf2538b7b@syzkaller.appspotmail.com
References: <20250729165622.13794-1-contact@arnaud-lcm.com>
In-Reply-To: <20250729165622.13794-1-contact@arnaud-lcm.com>

On 7/29/25 9:56 AM, Arnaud Lecomte wrote:
> Syzkaller reported a KASAN slab-out-of-bounds write in __bpf_get_stackid()
> when copying stack trace data.
> The issue occurs when the perf trace
> contains more stack entries than the stack map bucket can hold,
> leading to an out-of-bounds write in the bucket's data array.
> For build_id mode, we use sizeof(struct bpf_stack_build_id)
> to determine capacity, and for normal mode we use sizeof(u64).
>
> Reported-by: syzbot+c9b724fbb41cf2538b7b@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=c9b724fbb41cf2538b7b
> Tested-by: syzbot+c9b724fbb41cf2538b7b@syzkaller.appspotmail.com
> Signed-off-by: Arnaud Lecomte

Could you add a selftest? That way folks can easily see what the problem
is and why this fix solves it correctly.

> ---
> Changes in v2:
>  - Use the stack_map_data_size() utility to compute the stack map data size
> ---
>  kernel/bpf/stackmap.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 3615c06b7dfa..6f225d477f07 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -230,7 +230,7 @@ static long __bpf_get_stackid(struct bpf_map *map,
>  	struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
>  	struct stack_map_bucket *bucket, *new_bucket, *old_bucket;
>  	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
> -	u32 hash, id, trace_nr, trace_len, i;
> +	u32 hash, id, trace_nr, trace_len, i, max_depth;
>  	bool user = flags & BPF_F_USER_STACK;
>  	u64 *ips;
>  	bool hash_matches;
> @@ -241,6 +241,12 @@ static long __bpf_get_stackid(struct bpf_map *map,
>
>  	trace_nr = trace->nr - skip;
>  	trace_len = trace_nr * sizeof(u64);
> +
> +	/* Clamp the trace to max allowed depth */
> +	max_depth = smap->map.value_size / stack_map_data_size(map);
> +	if (trace_nr > max_depth)
> +		trace_nr = max_depth;
> +
>  	ips = trace->ip + skip;
>  	hash = jhash2((u32 *)ips, trace_len / sizeof(u32), 0);
>  	id = hash & (smap->n_buckets - 1);
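
To make the ask concrete: a selftest could start from a userspace setup
along the lines of the sketch below. This is not the actual test, just a
minimal illustration under my own assumptions (the map name, the 4-entry
value_size, libbpf's bpf_map_create(), and root/CAP_BPF to run); the real
test would also need a BPF program attached to a perf event that calls
bpf_get_stackid() on the undersized map and then checks the map contents.

/* Minimal sketch only, not the actual selftest: create a
 * BPF_MAP_TYPE_STACK_TRACE map whose value_size holds just four
 * u64 entries, i.e. the situation where a deeper perf callchain
 * used to overflow the bucket in __bpf_get_stackid().
 */
#include <stdio.h>
#include <unistd.h>
#include <bpf/bpf.h>

int main(void)
{
	const __u32 max_depth = 4;	/* bucket capacity in entries */
	int map_fd;

	map_fd = bpf_map_create(BPF_MAP_TYPE_STACK_TRACE, "small_stackmap",
				sizeof(__u32),			/* key: stack id */
				max_depth * sizeof(__u64),	/* value: 4 IPs */
				16, NULL);
	if (map_fd < 0) {
		perror("bpf_map_create");
		return 1;
	}

	/* A BPF program attached to a perf event would then call
	 * bpf_get_stackid(ctx, &small_stackmap, 0); with the fix,
	 * traces deeper than max_depth are clamped instead of
	 * writing past the bucket.
	 */
	printf("stack map fd: %d, capacity %u entries\n", map_fd, max_depth);
	close(map_fd);
	return 0;
}

With the selftests/bpf skeleton infrastructure this would presumably
become a proper prog_tests/ entry, but hopefully the sketch shows the
shape of the setup I have in mind.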