Subject: Re: [PATCH v11 1/5] arm64: Call stack_backtrace() only from within walk_stackframe()
From: "Madhavan T. Venkataraman"
To: Mark Rutland
Cc: broonie@kernel.org, jpoimboe@redhat.com, ardb@kernel.org,
    nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
    catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
    linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
    linux-kernel@vger.kernel.org
Date: Thu, 9 Dec 2021 22:13:50 -0600

Hey Mark,

Do you have any comments on the rest of the series? I am working on the
next version of the patchset. If you have any other comments, I will
wait.

Thanks.

Madhavan

On 11/30/21 2:29 PM, Madhavan T.
Venkataraman wrote:
> 
> 
> On 11/30/21 12:29 PM, Mark Rutland wrote:
>> On Tue, Nov 30, 2021 at 11:13:28AM -0600, Madhavan T. Venkataraman wrote:
>>> On 11/30/21 9:05 AM, Mark Rutland wrote:
>>>> On Tue, Nov 23, 2021 at 01:37:19PM -0600, madvenka@linux.microsoft.com wrote:
>>>>> From: "Madhavan T. Venkataraman"
>>>>>
>>>>> Currently, arch_stack_walk() calls start_backtrace() and walk_stackframe()
>>>>> separately. There is no need to do that. Instead, call start_backtrace()
>>>>> from within walk_stackframe(). In other words, walk_stackframe() is the only
>>>>> unwind function a consumer needs to call.
>>
>>>>> @@ -143,15 +140,19 @@ static int notrace unwind_frame(struct task_struct *tsk,
>>>>>  NOKPROBE_SYMBOL(unwind_frame);
>>>>>  
>>>>>  static void notrace walk_stackframe(struct task_struct *tsk,
>>>>> -			struct stackframe *frame,
>>>>> +			unsigned long fp, unsigned long pc,
>>>>>  			bool (*fn)(void *, unsigned long), void *data)
>>>>>  {
>>>>> +	struct stackframe frame;
>>>>> +
>>>>> +	start_backtrace(&frame, fp, pc);
>>>>> +
>>>>>  	while (1) {
>>>>>  		int ret;
>>>>>  
>>>>> -		if (!fn(data, frame->pc))
>>>>> +		if (!fn(data, frame.pc))
>>>>>  			break;
>>>>> -		ret = unwind_frame(tsk, frame);
>>>>> +		ret = unwind_frame(tsk, &frame);
>>>>>  		if (ret < 0)
>>>>>  			break;
>>>>>  	}
>>>>> @@ -195,17 +196,19 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>>>>>  			      void *cookie, struct task_struct *task,
>>>>>  			      struct pt_regs *regs)
>>>>>  {
>>>>> -	struct stackframe frame;
>>>>> -
>>>>> -	if (regs)
>>>>> -		start_backtrace(&frame, regs->regs[29], regs->pc);
>>>>> -	else if (task == current)
>>>>> -		start_backtrace(&frame,
>>>>> -				(unsigned long)__builtin_frame_address(1),
>>>>> -				(unsigned long)__builtin_return_address(0));
>>>>> -	else
>>>>> -		start_backtrace(&frame, thread_saved_fp(task),
>>>>> -				thread_saved_pc(task));
>>>>> -
>>>>> -	walk_stackframe(task, &frame, consume_entry, cookie);
>>>>> +	unsigned long fp, pc;
>>>>> +
>>>>> +	if (regs) {
>>>>> +		fp = regs->regs[29];
>>>>> +		pc = regs->pc;
>>>>> +	} else if (task == current) {
>>>>> +		/* Skip arch_stack_walk() in the stack trace. */
>>>>> +		fp = (unsigned long)__builtin_frame_address(1);
>>>>> +		pc = (unsigned long)__builtin_return_address(0);
>>>>> +	} else {
>>>>> +		/* Caller guarantees that the task is not running. */
>>>>> +		fp = thread_saved_fp(task);
>>>>> +		pc = thread_saved_pc(task);
>>>>> +	}
>>>>> +	walk_stackframe(task, fp, pc, consume_entry, cookie);
>>>>
>>>> I'd prefer to leave this as-is. The new and old structure are largely
>>>> equivalent, so we haven't made this any simpler, but we have added more
>>>> arguments to walk_stackframe().
>>>>
>>>
>>> This is just to simplify things when we eventually add
>>> arch_stack_walk_reliable(). That is all. All of the unwinding is done by a
>>> single unwinding function, and there are two consumers of that unwinding
>>> function - arch_stack_walk() and arch_stack_walk_reliable().
>>
>> I understand the theory, but I don't think that moving the start_backtrace()
>> call actually simplifies this in a meaningful way, and I think it'll make it
>> harder for us to make more meaningful simplifications later on.
>>
>> As of patch 4 of this series, we'll have:
>>
>> | noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>> |                                       void *cookie, struct task_struct *task,
>> |                                       struct pt_regs *regs)
>> | {
>> |         unsigned long fp, pc;
>> |
>> |         if (regs) {
>> |                 fp = regs->regs[29];
>> |                 pc = regs->pc;
>> |         } else if (task == current) {
>> |                 /* Skip arch_stack_walk() in the stack trace. */
>> |                 fp = (unsigned long)__builtin_frame_address(1);
>> |                 pc = (unsigned long)__builtin_return_address(0);
>> |         } else {
>> |                 /* Caller guarantees that the task is not running. */
>> |                 fp = thread_saved_fp(task);
>> |                 pc = thread_saved_pc(task);
>> |         }
>> |         walk_stackframe(task, fp, pc, consume_entry, cookie);
>> | }
>> |
>> | noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn,
>> |                                               void *cookie,
>> |                                               struct task_struct *task)
>> | {
>> |         unsigned long fp, pc;
>> |
>> |         if (task == current) {
>> |                 /* Skip arch_stack_walk_reliable() in the stack trace. */
>> |                 fp = (unsigned long)__builtin_frame_address(1);
>> |                 pc = (unsigned long)__builtin_return_address(0);
>> |         } else {
>> |                 /* Caller guarantees that the task is not running. */
>> |                 fp = thread_saved_fp(task);
>> |                 pc = thread_saved_pc(task);
>> |         }
>> |         if (unwind(task, fp, pc, consume_fn, cookie))
>> |                 return 0;
>> |         return -EINVAL;
>> | }
>>
>> Which I do not think is substantially simpler than the naive extrapolation
>> from what we currently have, e.g.
>>
>> | noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>> |                                       void *cookie, struct task_struct *task,
>> |                                       struct pt_regs *regs)
>> | {
>> |         struct stackframe frame;
>> |
>> |         if (regs) {
>> |                 unwind_init(&frame, regs->regs[29], regs->pc);
>> |         } else if (task == current) {
>> |                 unwind_init(&frame, __builtin_frame_address(1),
>> |                             __builtin_return_address(0));
>> |         } else {
>> |                 unwind_init(&frame, thread_saved_fp(task),
>> |                             thread_saved_pc(task));
>> |         }
>> |         walk_stackframe(task, &frame, consume_entry, cookie);
>> | }
>> |
>> | noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn,
>> |                                               void *cookie,
>> |                                               struct task_struct *task)
>> | {
>> |         struct stackframe frame;
>> |
>> |         if (task == current) {
>> |                 unwind_init(&frame, __builtin_frame_address(1),
>> |                             __builtin_return_address(0));
>> |         } else {
>> |                 unwind_init(&frame, thread_saved_fp(task),
>> |                             thread_saved_pc(task));
>> |         }
>> |         if (unwind(task, &frame, consume_fn, cookie))
>> |                 return 0;
>> |         return -EINVAL;
>> | }
>>
>> Further, I think we can factor this in a different way to reduce the
>> duplication:
>>
>> | /*
>> |  * TODO: document requirements here
>> |  */
>> | static inline void unwind_init_from_current_regs(struct stackframe *frame,
>> |                                                  struct pt_regs *regs)
>> | {
>> |         unwind_init(frame, regs->regs[29], regs->pc);
>> | }
>> |
>> | /*
>> |  * TODO: document requirements here
>> |  */
>> | static inline void unwind_init_from_blocked_task(struct stackframe *frame,
>> |                                                  struct task_struct *tsk)
>> | {
>> |         unwind_init(frame, thread_saved_fp(tsk),
>> |                     thread_saved_pc(tsk));
>> | }
>> |
>> | /*
>> |  * TODO: document requirements here
>> |  *
>> |  * Note: this is always inlined, and we expect our caller to be a noinline
>> |  * function, such that this starts from our caller's caller.
>> |  */
>> | static __always_inline void unwind_init_from_caller(struct stackframe *frame)
>> | {
>> |         unwind_init(frame, __builtin_frame_address(1),
>> |                     __builtin_return_address(0));
>> | }
>> |
>> | noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>> |                                       void *cookie, struct task_struct *task,
>> |                                       struct pt_regs *regs)
>> | {
>> |         struct stackframe frame;
>> |
>> |         if (regs)
>> |                 unwind_init_from_current_regs(&frame, regs);
>> |         else if (task == current)
>> |                 unwind_init_from_caller(&frame);
>> |         else
>> |                 unwind_init_from_blocked_task(&frame, task);
>> |
>> |         unwind(task, &frame, consume_entry, cookie);
>> | }
>> |
>> | noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn,
>> |                                               void *cookie,
>> |                                               struct task_struct *task)
>> | {
>> |         struct stackframe frame;
>> |
>> |         if (task == current)
>> |                 unwind_init_from_caller(&frame);
>> |         else
>> |                 unwind_init_from_blocked_task(&frame, task);
>> |
>> |         if (unwind(task, &frame, consume_fn, cookie))
>> |                 return 0;
>> |         return -EINVAL;
>> | }
>>
>> ... which minimizes the duplication and allows us to add specialized
>> initialization for each case if necessary, which I believe we will need in
>> future to make unwinding across exception boundaries (such as when starting
>> with regs) more useful.
>>
>> Thanks,
>> Mark.
>>
> 
> OK. I don't mind doing it this way.
> 
> Thanks.
> 
> Madhavan

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel