From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 22 Apr 2024 19:01:36 +0100
From: Vincent Donnefort <vdonnefort@google.com>
To: Steven Rostedt
Cc: Mike Rapoport, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org, mathieu.desnoyers@efficios.com,
	kernel-team@android.com, rdunlap@infradead.org, linux-mm@kvack.org
Subject: Re: [PATCH v20 2/5] ring-buffer: Introducing ring-buffer mapping functions
References: <20240406173649.3210836-1-vdonnefort@google.com>
	<20240406173649.3210836-3-vdonnefort@google.com>
	<20240418234346.4b4cb4dc@rorschach.local.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <20240418234346.4b4cb4dc@rorschach.local.home>
On Thu, Apr 18, 2024 at 11:43:46PM -0400, Steven Rostedt wrote:
> On Thu, 18 Apr 2024 09:55:55 +0300
> Mike Rapoport wrote:
> 
> Hi Mike,
> 
> Thanks for doing this review!
> 
> > > +/**
> > > + * struct trace_buffer_meta - Ring-buffer Meta-page description
> > > + * @meta_page_size:	Size of this meta-page.
> > > + * @meta_struct_len:	Size of this structure.
> > > + * @subbuf_size:	Size of each sub-buffer.
> > > + * @nr_subbufs:	Number of subbfs in the ring-buffer, including the reader.
> > > + * @reader.lost_events:	Number of events lost at the time of the reader swap.
> > > + * @reader.id:	subbuf ID of the current reader. ID range [0 : @nr_subbufs - 1]
> > > + * @reader.read:	Number of bytes read on the reader subbuf.
> > > + * @flags:	Placeholder for now, 0 until new features are supported.
> > > + * @entries:	Number of entries in the ring-buffer.
> > > + * @overrun:	Number of entries lost in the ring-buffer.
> > > + * @read:	Number of entries that have been read.
> > > + * @Reserved1:	Reserved for future use.
> > > + * @Reserved2:	Reserved for future use.
> > > + */
> > > +struct trace_buffer_meta {
> > > +	__u32	meta_page_size;
> > > +	__u32	meta_struct_len;
> > > +
> > > +	__u32	subbuf_size;
> > > +	__u32	nr_subbufs;
> > > +
> > > +	struct {
> > > +		__u64	lost_events;
> > > +		__u32	id;
> > > +		__u32	read;
> > > +	} reader;
> > > +
> > > +	__u64	flags;
> > > +
> > > +	__u64	entries;
> > > +	__u64	overrun;
> > > +	__u64	read;
> > > +
> > > +	__u64	Reserved1;
> > > +	__u64	Reserved2;
> > 
> > Why do you need reserved fields? This structure always resides in the
> > beginning of a page and the rest of the page is essentially "reserved".
> 
> So this code is also going to be used in arm's pkvm hypervisor code,
> where it will be using these fields, but since we are looking at
> keeping the same interface between the two, we don't want these used by
> this interface.
> 
> We probably should add a comment about that.
> 
> > 
> > > +};
> > > +
> > > +#endif /* _TRACE_MMAP_H_ */
> > > diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> > > index cc9ebe593571..793ecc454039 100644
> > > --- a/kernel/trace/ring_buffer.c
> > > +++ b/kernel/trace/ring_buffer.c
> > 
> > ...
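A user-space consumer that mmaps the meta-page needs a binary-compatible view of the structure quoted above. A minimal sketch, assuming an LP64 target where `__u32`/`__u64` correspond to `uint32_t`/`uint64_t` (the offsets below follow from natural alignment, not from any kernel header):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* User-space mirror of the UAPI struct quoted above; the kernel's
 * __u32/__u64 are replaced by the stdint fixed-width types. */
struct trace_buffer_meta {
	uint32_t meta_page_size;
	uint32_t meta_struct_len;

	uint32_t subbuf_size;
	uint32_t nr_subbufs;

	struct {
		uint64_t lost_events;
		uint32_t id;
		uint32_t read;
	} reader;

	uint64_t flags;

	uint64_t entries;
	uint64_t overrun;
	uint64_t read;

	uint64_t Reserved1;
	uint64_t Reserved2;
};

/* Returns the local structure size, so a reader can sanity-check it
 * against the meta_struct_len field the kernel exports. */
size_t trace_buffer_meta_len(void)
{
	return sizeof(struct trace_buffer_meta);
}
```

With natural alignment there is no implicit padding: the `reader` sub-struct lands at offset 16 and the whole structure is 80 bytes, which is what `meta_struct_len` should report for this layout.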
> > 
> > > +static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
> > > +				   unsigned long *subbuf_ids)
> > > +{
> > > +	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
> > > +	unsigned int nr_subbufs = cpu_buffer->nr_pages + 1;
> > > +	struct buffer_page *first_subbuf, *subbuf;
> > > +	int id = 0;
> > > +
> > > +	subbuf_ids[id] = (unsigned long)cpu_buffer->reader_page->page;
> > > +	cpu_buffer->reader_page->id = id++;
> > > +
> > > +	first_subbuf = subbuf = rb_set_head_page(cpu_buffer);
> > > +	do {
> > > +		if (WARN_ON(id >= nr_subbufs))
> > > +			break;
> > > +
> > > +		subbuf_ids[id] = (unsigned long)subbuf->page;
> > > +		subbuf->id = id;
> > > +
> > > +		rb_inc_page(&subbuf);
> > > +		id++;
> > > +	} while (subbuf != first_subbuf);
> > > +
> > > +	/* install subbuf ID to kern VA translation */
> > > +	cpu_buffer->subbuf_ids = subbuf_ids;
> > > +
> > > +	/* __rb_map_vma() pads the meta-page to align it with the sub-buffers */
> > > +	meta->meta_page_size = PAGE_SIZE << cpu_buffer->buffer->subbuf_order;
> > 
> > Isn't this a single page?
> 
> One thing we are doing is to make sure that the subbuffers are aligned
> by their size. If a subbuffer is 3 pages, it should be aligned on 3
> page boundaries. This was something that Linus suggested.
> 
> > 
> > > +	meta->meta_struct_len = sizeof(*meta);
> > > +	meta->nr_subbufs = nr_subbufs;
> > > +	meta->subbuf_size = cpu_buffer->buffer->subbuf_size + BUF_PAGE_HDR_SIZE;
> > > +
> > > +	rb_update_meta_page(cpu_buffer);
> > > +}
> > 
> > ...
> > 
> > > +#define subbuf_page(off, start) \
> > > +	virt_to_page((void *)((start) + ((off) << PAGE_SHIFT)))
> > > +
> > > +#define foreach_subbuf_page(sub_order, start, page)		\
> > 
> > Nit: usually iterators in kernel use for_each
> 
> Ah, good catch. Yeah, that should be changed. But then ...
> > 
> > > +	page = subbuf_page(0, (start));			\
> > > +	for (int __off = 0; __off < (1 << (sub_order));	\
> > > +	     __off++, page = subbuf_page(__off, (start)))
> > 
> > The pages allocated with alloc_pages_node(.. subbuf_order) are
> > physically contiguous and struct pages for them are also contiguous, so
> > inside a subbuf_order allocation you can just do page++.
> > 
> 
> I'm wondering if we should just nuke the macro. It was there because
> the previous implementation did things twice. But now it's just done
> once here:
> 
> +	while (s < nr_subbufs && p < nr_pages) {
> +		struct page *page;
> +
> +		foreach_subbuf_page(subbuf_order, cpu_buffer->subbuf_ids[s], page) {
> +			if (p >= nr_pages)
> +				break;
> +
> +			pages[p++] = page;
> +		}
> +		s++;
> +	}
> 
> Perhaps that should just be:
> 
> 	while (s < nr_subbufs && p < nr_pages) {
> 		struct page *page;
> 		int off;
> 
> 		page = subbuf_page(0, cpu_buffer->subbuf_ids[s]);
> 		for (off = 0; off < (1 << subbuf_order); off++, page++, p++) {
> 			if (p >= nr_pages)
> 				break;
> 
> 			pages[p] = page;
> 		}
> 		s++;
> 	}
> 
> ?

Yeah, was hesitating to kill it with the last version. Happy to do it for
the next one.

> 
> -- Steve
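The simplified walk Steve proposes above can be checked in isolation. Below is a user-space model with hypothetical names: `struct page` pointers are represented as plain page indices, which is a fair stand-in here precisely because, as Mike notes, the pages of one sub-buffer allocation are physically contiguous, so advancing the pointer is the same as incrementing an index. `subbuf_ids[]` holds the first-page index of each sub-buffer:

```c
/* Model of the simplified loop: walk each sub-buffer and emit its
 * pages by incrementing a page index, instead of recomputing the page
 * with a subbuf_page()-style macro on every iteration.
 *
 * Returns the number of pages collected into pages[]. */
static unsigned int collect_pages(const unsigned long *subbuf_ids,
				  unsigned int nr_subbufs,
				  unsigned int subbuf_order,
				  unsigned long *pages,
				  unsigned int nr_pages)
{
	unsigned int s = 0, p = 0;

	while (s < nr_subbufs && p < nr_pages) {
		unsigned long page = subbuf_ids[s];
		unsigned int off;

		/* 1 << subbuf_order pages per sub-buffer; page++ is valid
		 * because each sub-buffer's pages are contiguous. */
		for (off = 0; off < (1u << subbuf_order); off++, page++) {
			if (p >= nr_pages)
				break;

			pages[p++] = page;
		}
		s++;
	}
	return p;
}
```

With three sub-buffers of order 1 (two pages each) starting at page indices 100, 200 and 300, and a five-page destination array, the loop fills {100, 101, 200, 201, 300} and stops once the array is full.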