From: Andreas Hindborg
To: Boqun Feng, Lyude Paul
Cc: Dirk Behme, Miguel Ojeda, Alex Gaynor, Anna-Maria Behnsen,
 Frederic Weisbecker, Thomas Gleixner, Gary Guo, Björn Roy Baron,
 Benno Lossin, Alice Ryhl, rust-for-linux@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 00/14] hrtimer Rust API
In-Reply-To: (Boqun Feng's message of "Tue, 1 Oct 2024 07:42:22 -0700")
References: <20240917222739.1298275-1-a.hindborg@kernel.org>
Date: Fri, 11 Oct 2024 16:52:01 +0200
Message-ID: <87a5falmjy.fsf@kernel.org>
X-Mailing-List: rust-for-linux@vger.kernel.org

Dirk, thanks for reporting!

Boqun Feng writes:

> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>> > Hi!
>> >
>> > This series adds support for using the `hrtimer` subsystem from Rust code.
>> >
>> > I tried breaking up the code in some smaller patches, hopefully that will
>> > ease the review process a bit.
>>
>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>> Example from hrtimer.rs.
>>
>> This is from lockdep:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>
>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>> interrupt context? Or a more subtle one?
>
> I think it's calling mutex inside an interrupt context as shown by the
> callstack:
>
>   ] __mutex_lock+0xa0/0xa4
>   ] ...
>   ] hrtimer_interrupt+0x1d4/0x2ac
>
> , it is because:
>
> +//! struct ArcIntrusiveTimer {
> +//!     #[pin]
> +//!     timer: Timer<Self>,
> +//!     #[pin]
> +//!     flag: Mutex<u64>,
> +//!     #[pin]
> +//!     cond: CondVar,
> +//! }
>
> has a Mutex, which actually should be a SpinLockIrq [1].
> Note that
> irq-off is needed for the lock, because otherwise we will hit a self
> deadlock due to interrupts:
>
> 	spin_lock(&a);
>
> 	<timer interrupt>
> 		spin_lock(&a);
>
> Also notice that the IrqDisabled<'_> token can be simply created by
> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
> we don't support nested interrupts*).

I updated the example based on the work in [1]. I think we need to
update `CondVar::wait` to support waiting with irq disabled. Without
this, when we get back from `bindings::schedule_timeout` in
`CondVar::wait_internal`, interrupts are enabled:

```rust
use kernel::{
    hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
    impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
    irq::IrqDisabled,
    prelude::*,
    sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
    time::Ktime,
};

#[pin_data]
struct ArcIntrusiveTimer {
    #[pin]
    timer: Timer<Self>,
    #[pin]
    flag: SpinLockIrq<u64>,
    #[pin]
    cond: CondVar,
}

impl ArcIntrusiveTimer {
    fn new() -> impl PinInit<Self, Error> {
        try_pin_init!(Self {
            timer <- Timer::new(),
            flag <- new_spinlock_irq!(0),
            cond <- new_condvar!(),
        })
    }
}

impl TimerCallback for ArcIntrusiveTimer {
    type CallbackTarget<'a> = Arc<Self>;
    type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;

    fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
        pr_info!("Timer called\n");
        let mut guard = this.flag.lock_with(irq);
        *guard += 1;
        this.cond.notify_all();
        if *guard == 5 {
            TimerRestart::NoRestart
        } else {
            TimerRestart::Restart
        }
    }
}

impl_has_timer! {
    impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
}

let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));

kernel::irq::with_irqs_disabled(|irq| {
    let mut guard = has_timer.flag.lock_with(irq);

    while *guard != 5 {
        pr_info!("Not 5 yet, waiting\n");
        has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
    }
});
```

I think an update of `CondVar::wait` should be part of the patch set [1].

Best regards,
Andreas

[1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
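P.S. For anyone less familiar with the wait-loop shape in the example
above: semantically it is the same pattern as a userspace condition
variable. A minimal std-only sketch (hypothetical names, no kernel
APIs; a spawned thread stands in for the timer callback that bumps the
counter and notifies):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Wait until a worker thread has bumped the shared counter to 5,
// sleeping on the condition variable between notifications.
fn wait_for_five() -> u64 {
    let state = Arc::new((Mutex::new(0u64), Condvar::new()));
    let worker_state = Arc::clone(&state);

    // Stand-in for the timer callback: bump the counter and notify.
    let worker = thread::spawn(move || {
        let (lock, cond) = &*worker_state;
        for _ in 0..5 {
            thread::sleep(Duration::from_millis(10));
            let mut guard = lock.lock().unwrap();
            *guard += 1;
            cond.notify_all();
        }
    });

    let (lock, cond) = &*state;
    let mut guard = lock.lock().unwrap();
    while *guard != 5 {
        // `wait` releases the lock while sleeping and reacquires it
        // before returning; the loop guards against spurious wakeups.
        guard = cond.wait(guard).unwrap();
    }
    let result = *guard;
    drop(guard);
    worker.join().unwrap();
    result
}

fn main() {
    println!("counter reached {}", wait_for_five());
}
```

The kernel `CondVar` follows the same release-sleep-reacquire shape;
the open question in this thread is only about the interrupt state on
the return path from the sleep.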