* [PATCH v3 0/3] pps: improve PREEMPT_RT performance @ 2026-04-09 15:26 Michael Byczkowski 2026-04-09 15:27 ` [PATCH v3 1/3] pps: pps-gpio: split IRQ handler into hardirq and threaded parts Michael Byczkowski 0 siblings, 1 reply; 5+ messages in thread From: Michael Byczkowski @ 2026-04-09 15:26 UTC (permalink / raw) To: Rodolfo Giometti; +Cc: Andrew Morton, linux-kernel Dear Rodolfo, This is v3 of the PPS PREEMPT_RT patchset, with the fix for the sleeping-in-atomic issue in patch 2/3 now squashed in. Changes since v2: - Patch 2/3: moved wake_up_interruptible_all() and kill_fasync() out of the raw_spinlock section to avoid sleeping-in-atomic on PREEMPT_RT (reported by Nikolaus Buchwitz <nb@buchwitz.com>) Andrew Morton pointed me your way as PPS maintainer. I'm running a precision NTP time server on a Raspberry Pi 5 with a PREEMPT_RT kernel and a u-blox ZED-F9P GPS receiver providing PPS via GPIO. I found three issues in the PPS subsystem that cause unnecessary jitter under PREEMPT_RT; the fixes are fully backward-compatible with non-RT kernels: 1. pps-gpio: The IRQ handler is force-threaded on PREEMPT_RT, so the PPS timestamp is captured after a scheduling delay rather than at interrupt entry. Fix: split into a hardirq primary handler (captures the timestamp only) and a threaded handler (processes the event). 2. pps_device.lock: spinlock_t becomes a sleeping mutex on PREEMPT_RT, allowing pps_event() to be preempted mid-update. Fix: convert to raw_spinlock_t and move sleeping calls out of the critical section. 3. pps_kc_hardpps_lock: Same issue as (2), in the kernel consumer path that calls hardpps(). Fix: convert to DEFINE_RAW_SPINLOCK. All three patches are tested on a Raspberry Pi 5 running a 7.0.0-rc6 PREEMPT_RT kernel. On non-RT kernels there is zero behavioral change. 
Signed-off-by: Michael Byczkowski <by@by-online.de> Tested-by: Michael Byczkowski <by@by-online.de> Michael Byczkowski (3): pps: pps-gpio: split IRQ handler into hardirq and threaded parts pps: convert pps_device lock to raw_spinlock for PREEMPT_RT pps: convert pps_kc_hardpps_lock to raw_spinlock for PREEMPT_RT drivers/pps/clients/pps-gpio.c | 37 +++++++++++++++++++++++----------- drivers/pps/kapi.c | 20 ++++++++++-------- drivers/pps/kc.c | 22 ++++++++++---------- drivers/pps/pps.c | 16 +++++++-------- include/linux/pps_kernel.h | 2 +- 5 files changed, 57 insertions(+), 40 deletions(-) -- 2.47.3 ^ permalink raw reply [flat|nested] 5+ messages in thread
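For the NTP time server use case described in the cover letter, the PPS device produced by this driver is typically consumed by chrony alongside gpsd. The fragment below is an illustration only: the device path, refids, and offset value are assumptions, not taken from this thread.

```
# Hypothetical chrony.conf fragment (device path, refids and offsets assumed):
# coarse NMEA time-of-day from gpsd via shared memory,
# precise second edges from the GPIO PPS device
refclock SHM 0 refid NMEA offset 0.2 delay 0.2
refclock PPS /dev/pps0 refid PPS lock NMEA
```

The `lock NMEA` option pairs each PPS pulse with the NMEA time-of-day so chrony can number the seconds; without some coarse reference the bare pulses are ambiguous.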
* [PATCH v3 1/3] pps: pps-gpio: split IRQ handler into hardirq and threaded parts 2026-04-09 15:26 [PATCH v3 0/3] pps: improve PREEMPT_RT performance Michael Byczkowski @ 2026-04-09 15:27 ` Michael Byczkowski 2026-04-09 15:28 ` [PATCH v3 2/3] pps: convert pps_device lock to raw_spinlock for PREEMPT_RT Michael Byczkowski 2026-04-09 18:52 ` [PATCH v3 1/3] pps: pps-gpio: split IRQ handler into hardirq and threaded parts David Laight 0 siblings, 2 replies; 5+ messages in thread From: Michael Byczkowski @ 2026-04-09 15:27 UTC (permalink / raw) To: Rodolfo Giometti; +Cc: Andrew Morton, linux-kernel On PREEMPT_RT, all IRQ handlers are force-threaded. The current pps_gpio_irq_handler captures the PPS timestamp via pps_get_ts() inside the handler, but on RT this runs in thread context — after a scheduling delay that adds variable latency (jitter) to the timestamp. Split the handler into a hardirq primary (pps_gpio_irq_hardirq) that only captures the timestamp, and a threaded handler (pps_gpio_irq_thread) that processes the event. With request_threaded_irq(), the primary handler runs in hardirq context even on PREEMPT_RT, preserving nanosecond timestamp precision. On non-RT kernels, request_threaded_irq with an explicit primary handler behaves identically to the previous request_irq call. 
Signed-off-by: Michael Byczkowski <by@by-online.de> Acked-by: Rodolfo Giometti <giometti@enneenne.com> Tested-by: Michael Byczkowski <by@by-online.de> Tested-by: Calvin Owens <calvin@wbinvd.org> --- drivers/pps/clients/pps-gpio.c | 37 +++++++++++++++++++++++----------- 1 file changed, 25 insertions(+), 12 deletions(-) diff --git a/drivers/pps/clients/pps-gpio.c b/drivers/pps/clients/pps-gpio.c index 935da68610c7..f37398fd6b10 100644 --- a/drivers/pps/clients/pps-gpio.c +++ b/drivers/pps/clients/pps-gpio.c @@ -35,33 +35,44 @@ struct pps_gpio_device_data { bool capture_clear; unsigned int echo_active_ms; /* PPS echo active duration */ unsigned long echo_timeout; /* timer timeout value in jiffies */ + struct pps_event_time ts; /* timestamp captured in hardirq */ }; /* * Report the PPS event */ -static irqreturn_t pps_gpio_irq_handler(int irq, void *data) +/* + * Primary hardirq handler — runs in hardirq context even on PREEMPT_RT. + * Only captures the timestamp; all other work is deferred to the thread. + */ +static irqreturn_t pps_gpio_irq_hardirq(int irq, void *data) { - const struct pps_gpio_device_data *info; - struct pps_event_time ts; - int rising_edge; + struct pps_gpio_device_data *info = data; + + pps_get_ts(&info->ts); - /* Get the time stamp first */ - pps_get_ts(&ts); + return IRQ_WAKE_THREAD; +} - info = data; +/* + * Threaded handler — processes the PPS event using the timestamp + * captured in hardirq context above. + */ +static irqreturn_t pps_gpio_irq_thread(int irq, void *data) +{ + struct pps_gpio_device_data *info = data; + int rising_edge; - /* Small trick to bypass the check on edge's direction when capture_clear is unset */ rising_edge = info->capture_clear ? 
gpiod_get_value(info->gpio_pin) : !info->assert_falling_edge; if ((rising_edge && !info->assert_falling_edge) || (!rising_edge && info->assert_falling_edge)) - pps_event(info->pps, &ts, PPS_CAPTUREASSERT, data); + pps_event(info->pps, &info->ts, PPS_CAPTUREASSERT, data); else if (info->capture_clear && ((rising_edge && info->assert_falling_edge) || (!rising_edge && !info->assert_falling_edge))) - pps_event(info->pps, &ts, PPS_CAPTURECLEAR, data); + pps_event(info->pps, &info->ts, PPS_CAPTURECLEAR, data); else dev_warn_ratelimited(&info->pps->dev, "IRQ did not trigger any PPS event\n"); @@ -210,8 +221,10 @@ static int pps_gpio_probe(struct platform_device *pdev) } /* register IRQ interrupt handler */ - ret = request_irq(data->irq, pps_gpio_irq_handler, - get_irqf_trigger_flags(data), data->info.name, data); + ret = request_threaded_irq(data->irq, + pps_gpio_irq_hardirq, pps_gpio_irq_thread, + get_irqf_trigger_flags(data) | IRQF_ONESHOT, + data->info.name, data); if (ret) { pps_unregister_source(data->pps); dev_err(dev, "failed to acquire IRQ %d\n", data->irq); -- 2.47.3 ^ permalink raw reply related [flat|nested] 5+ messages in thread
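The edge-selection logic in the threaded handler is compact enough to check in isolation. Below is a userspace model of it (invented names, not part of the patch) that mirrors pps_gpio_irq_thread()'s decision table, including the "small trick" that forces an ASSERT event whenever capture_clear is unset:

```c
#include <assert.h>
#include <stdbool.h>

/* Possible outcomes; values are illustrative, not the kernel's
 * PPS_CAPTUREASSERT/PPS_CAPTURECLEAR bit definitions. */
enum pps_edge_event { EVENT_NONE, EVENT_ASSERT, EVENT_CLEAR };

/*
 * Mirror of the edge-selection logic in pps_gpio_irq_thread().
 * When capture_clear is unset, rising_edge is forced to
 * !assert_falling_edge, so the first branch always matches and
 * every pulse is reported as an ASSERT event.
 */
static enum pps_edge_event classify_edge(bool capture_clear,
                                         bool assert_falling_edge,
                                         int gpio_value)
{
    int rising_edge = capture_clear ? gpio_value : !assert_falling_edge;

    if ((rising_edge && !assert_falling_edge) ||
        (!rising_edge && assert_falling_edge))
        return EVENT_ASSERT;
    else if (capture_clear &&
             ((rising_edge && assert_falling_edge) ||
              (!rising_edge && !assert_falling_edge)))
        return EVENT_CLEAR;
    return EVENT_NONE;   /* reached only on inconsistent input */
}
```

Note that with capture_clear set, every GPIO level maps to either ASSERT or CLEAR, which is why the driver's "IRQ did not trigger any PPS event" warning is rate-limited rather than treated as a hard error.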
* [PATCH v3 2/3] pps: convert pps_device lock to raw_spinlock for PREEMPT_RT 2026-04-09 15:27 ` [PATCH v3 1/3] pps: pps-gpio: split IRQ handler into hardirq and threaded parts Michael Byczkowski @ 2026-04-09 15:28 ` Michael Byczkowski 2026-04-09 15:29 ` [PATCH v3 3/3] pps: convert pps_kc_hardpps_lock " Michael Byczkowski 2026-04-09 18:52 ` [PATCH v3 1/3] pps: pps-gpio: split IRQ handler into hardirq and threaded parts David Laight 1 sibling, 1 reply; 5+ messages in thread From: Michael Byczkowski @ 2026-04-09 15:28 UTC (permalink / raw) To: Rodolfo Giometti; +Cc: Andrew Morton, linux-kernel On PREEMPT_RT, spinlock_t becomes a sleeping mutex, which allows pps_event() to be preempted mid-update by other RT threads. This introduces jitter in the PPS event recording path. Convert pps_device.lock to raw_spinlock_t so the timestamp and sequence number updates remain non-preemptible on RT. Move wake_up_interruptible_all() and kill_fasync() outside the raw_spinlock critical section, as these acquire sleeping locks on PREEMPT_RT. The lock now protects only the timestamp, sequence number, and last_ev updates. This is safe because PPS_FETCH waiters use wait_event_interruptible_timeout() and will re-check after waking, and kill_fasync() does not require PPS data to be locked (thanks to Nikolaus Buchwitz <nb@buchwitz.com> for reporting). On non-RT kernels, raw_spinlock_t compiles to identical code as spinlock_t — no behavioral change. 
Signed-off-by: Michael Byczkowski <by@by-online.de> Tested-by: Michael Byczkowski <by@by-online.de> --- drivers/pps/kapi.c | 20 ++++++++++++-------- drivers/pps/pps.c | 16 ++++++++-------- include/linux/pps_kernel.h | 2 +- 3 files changed, 21 insertions(+), 17 deletions(-) diff --git a/drivers/pps/kapi.c b/drivers/pps/kapi.c index 1bf0335a1b41..97898b0eeff4 100644 --- a/drivers/pps/kapi.c +++ b/drivers/pps/kapi.c @@ -102,7 +102,7 @@ struct pps_device *pps_register_source(struct pps_source_info *info, pps->info.echo = pps_echo_client_default; init_waitqueue_head(&pps->queue); - spin_lock_init(&pps->lock); + raw_spin_lock_init(&pps->lock); /* Create the char device */ err = pps_register_cdev(pps); @@ -167,7 +167,7 @@ void pps_event(struct pps_device *pps, struct pps_event_time *ts, int event, timespec_to_pps_ktime(&ts_real, ts->ts_real); - spin_lock_irqsave(&pps->lock, flags); + raw_spin_lock_irqsave(&pps->lock, flags); /* Must call the echo function? */ if ((pps->params.mode & (PPS_ECHOASSERT | PPS_ECHOCLEAR))) @@ -204,16 +204,20 @@ void pps_event(struct pps_device *pps, struct pps_event_time *ts, int event, captured = ~0; } - pps_kc_event(pps, ts, event); + pps_kc_event(pps, ts, event); - /* Wake up if captured something */ - if (captured) { + if (captured) pps->last_ev++; - wake_up_interruptible_all(&pps->queue); + raw_spin_unlock_irqrestore(&pps->lock, flags); + + /* + * Wake up after releasing the lock: wake_up_interruptible_all() + * and kill_fasync() acquire sleeping locks on PREEMPT_RT. 
+ */ + if (captured) { + wake_up_interruptible_all(&pps->queue); kill_fasync(&pps->async_queue, SIGIO, POLL_IN); } - - spin_unlock_irqrestore(&pps->lock, flags); } EXPORT_SYMBOL(pps_event); diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c index c6b8b6478276..ad96425208a1 100644 --- a/drivers/pps/pps.c +++ b/drivers/pps/pps.c @@ -103,12 +103,12 @@ static long pps_cdev_ioctl(struct file *file, case PPS_GETPARAMS: dev_dbg(&pps->dev, "PPS_GETPARAMS\n"); - spin_lock_irq(&pps->lock); + raw_spin_lock_irq(&pps->lock); /* Get the current parameters */ params = pps->params; - spin_unlock_irq(&pps->lock); + raw_spin_unlock_irq(&pps->lock); err = copy_to_user(uarg, ¶ms, sizeof(struct pps_kparams)); if (err) @@ -139,7 +139,7 @@ static long pps_cdev_ioctl(struct file *file, return -EINVAL; } - spin_lock_irq(&pps->lock); + raw_spin_lock_irq(&pps->lock); /* Save the new parameters */ pps->params = params; @@ -163,7 +163,7 @@ static long pps_cdev_ioctl(struct file *file, pps->params.assert_off_tu.flags = 0; pps->params.clear_off_tu.flags = 0; - spin_unlock_irq(&pps->lock); + raw_spin_unlock_irq(&pps->lock); break; @@ -190,7 +190,7 @@ static long pps_cdev_ioctl(struct file *file, return err; /* Return the fetched timestamp and save last fetched event */ - spin_lock_irq(&pps->lock); + raw_spin_lock_irq(&pps->lock); pps->last_fetched_ev = pps->last_ev; @@ -200,7 +200,7 @@ static long pps_cdev_ioctl(struct file *file, fdata.info.clear_tu = pps->clear_tu; fdata.info.current_mode = pps->current_mode; - spin_unlock_irq(&pps->lock); + raw_spin_unlock_irq(&pps->lock); err = copy_to_user(uarg, &fdata, sizeof(struct pps_fdata)); if (err) @@ -278,7 +278,7 @@ static long pps_cdev_compat_ioctl(struct file *file, return err; /* Return the fetched timestamp and save last fetched event */ - spin_lock_irq(&pps->lock); + raw_spin_lock_irq(&pps->lock); pps->last_fetched_ev = pps->last_ev; @@ -291,7 +291,7 @@ static long pps_cdev_compat_ioctl(struct file *file, memcpy(&compat.info.clear_tu, 
&pps->clear_tu, sizeof(struct pps_ktime_compat)); - spin_unlock_irq(&pps->lock); + raw_spin_unlock_irq(&pps->lock); return copy_to_user(uarg, &compat, sizeof(struct pps_fdata_compat)) ? -EFAULT : 0; diff --git a/include/linux/pps_kernel.h b/include/linux/pps_kernel.h index aab0aebb529e..f2fe504071ed 100644 --- a/include/linux/pps_kernel.h +++ b/include/linux/pps_kernel.h @@ -59,7 +59,7 @@ struct pps_device { void const *lookup_cookie; /* For pps_lookup_dev() only */ struct device dev; struct fasync_struct *async_queue; /* fasync method */ - spinlock_t lock; + raw_spinlock_t lock; }; /* -- 2.47.3 ^ permalink raw reply related [flat|nested] 5+ messages in thread
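The ordering rule this patch enforces (update state under the lock, notify waiters only after unlock) can be sketched as a userspace analogue using POSIX primitives. This is not the kernel code: the pthread mutex stands in for the raw_spinlock, the condition variable for the wait queue, and all names are invented. The waiter is safe against the post-unlock wakeup because, like PPS_FETCH, it re-checks a sequence number under the lock:

```c
#include <assert.h>
#include <pthread.h>

struct pps_state {
    pthread_mutex_t lock;      /* stands in for pps->lock (raw_spinlock) */
    pthread_cond_t  queue;     /* stands in for pps->queue (wait queue)  */
    unsigned int    last_ev;   /* event sequence number                  */
    long            assert_ns; /* captured timestamp (illustrative)      */
};

/* Analogue of the reworked pps_event(): timestamp and sequence number
 * are updated under the lock; the wakeup happens only after unlock,
 * mirroring the rule that wake_up_interruptible_all() and kill_fasync()
 * must not run inside a raw_spinlock section on PREEMPT_RT. */
static void record_event(struct pps_state *s, long ts_ns)
{
    pthread_mutex_lock(&s->lock);
    s->assert_ns = ts_ns;
    s->last_ev++;
    pthread_mutex_unlock(&s->lock);

    /* Wake up after releasing the lock. */
    pthread_cond_broadcast(&s->queue);
}

/* Analogue of a PPS_FETCH waiter: it re-checks the sequence number
 * under the lock on every wakeup, so a notification sent after unlock
 * is never lost and spurious wakeups are harmless. */
static unsigned int wait_for_event(struct pps_state *s, unsigned int seen)
{
    unsigned int ev;

    pthread_mutex_lock(&s->lock);
    while (s->last_ev == seen)
        pthread_cond_wait(&s->queue, &s->lock);
    ev = s->last_ev;
    pthread_mutex_unlock(&s->lock);
    return ev;
}
```

The design point is that the lock now protects only the data (timestamp, sequence number), while the notification mechanism carries no data of its own, so it can safely run unlocked.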
* [PATCH v3 3/3] pps: convert pps_kc_hardpps_lock to raw_spinlock for PREEMPT_RT 2026-04-09 15:28 ` [PATCH v3 2/3] pps: convert pps_device lock to raw_spinlock for PREEMPT_RT Michael Byczkowski @ 2026-04-09 15:29 ` Michael Byczkowski 0 siblings, 0 replies; 5+ messages in thread From: Michael Byczkowski @ 2026-04-09 15:29 UTC (permalink / raw) To: Rodolfo Giometti; +Cc: Andrew Morton, linux-kernel Convert pps_kc_hardpps_lock from spinlock_t to raw_spinlock_t. This lock is held in pps_kc_event() which calls hardpps(). hardpps() takes tk_core.lock which is already a raw_spinlock — nesting a sleeping lock (PREEMPT_RT spinlock_t) over a raw_spinlock is invalid. The locked section only checks a pointer comparison and calls hardpps(), both of which are non-sleeping. On non-RT kernels, raw_spinlock_t compiles to identical code as spinlock_t — no behavioral change. Signed-off-by: Michael Byczkowski <by@by-online.de> Acked-by: Rodolfo Giometti <giometti@enneenne.com> Tested-by: Michael Byczkowski <by@by-online.de> Tested-by: Calvin Owens <calvin@wbinvd.org> --- drivers/pps/kc.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/pps/kc.c b/drivers/pps/kc.c index fbd23295afd7..9c6c024d5083 100644 --- a/drivers/pps/kc.c +++ b/drivers/pps/kc.c @@ -21,7 +21,7 @@ */ /* state variables to bind kernel consumer */ -static DEFINE_SPINLOCK(pps_kc_hardpps_lock); +static DEFINE_RAW_SPINLOCK(pps_kc_hardpps_lock); /* PPS API (RFC 2783): current source and mode for kernel consumer */ static struct pps_device *pps_kc_hardpps_dev; /* unique pointer to device */ static int pps_kc_hardpps_mode; /* mode bits for kernel consumer */ @@ -36,17 +36,17 @@ static int pps_kc_hardpps_mode; /* mode bits for kernel consumer */ int pps_kc_bind(struct pps_device *pps, struct pps_bind_args *bind_args) { /* Check if another consumer is already bound */ - spin_lock_irq(&pps_kc_hardpps_lock); + raw_spin_lock_irq(&pps_kc_hardpps_lock); if (bind_args->edge == 0) if 
(pps_kc_hardpps_dev == pps) { pps_kc_hardpps_mode = 0; pps_kc_hardpps_dev = NULL; - spin_unlock_irq(&pps_kc_hardpps_lock); + raw_spin_unlock_irq(&pps_kc_hardpps_lock); dev_info(&pps->dev, "unbound kernel" " consumer\n"); } else { - spin_unlock_irq(&pps_kc_hardpps_lock); + raw_spin_unlock_irq(&pps_kc_hardpps_lock); dev_err(&pps->dev, "selected kernel consumer" " is not bound\n"); return -EINVAL; @@ -56,11 +56,11 @@ int pps_kc_bind(struct pps_device *pps, struct pps_bind_args *bind_args) pps_kc_hardpps_dev == pps) { pps_kc_hardpps_mode = bind_args->edge; pps_kc_hardpps_dev = pps; - spin_unlock_irq(&pps_kc_hardpps_lock); + raw_spin_unlock_irq(&pps_kc_hardpps_lock); dev_info(&pps->dev, "bound kernel consumer: " "edge=0x%x\n", bind_args->edge); } else { - spin_unlock_irq(&pps_kc_hardpps_lock); + raw_spin_unlock_irq(&pps_kc_hardpps_lock); dev_err(&pps->dev, "another kernel consumer" " is already bound\n"); return -EINVAL; @@ -78,15 +78,15 @@ int pps_kc_bind(struct pps_device *pps, struct pps_bind_args *bind_args) */ void pps_kc_remove(struct pps_device *pps) { - spin_lock_irq(&pps_kc_hardpps_lock); + raw_spin_lock_irq(&pps_kc_hardpps_lock); if (pps == pps_kc_hardpps_dev) { pps_kc_hardpps_mode = 0; pps_kc_hardpps_dev = NULL; - spin_unlock_irq(&pps_kc_hardpps_lock); + raw_spin_unlock_irq(&pps_kc_hardpps_lock); dev_info(&pps->dev, "unbound kernel consumer" " on device removal\n"); } else - spin_unlock_irq(&pps_kc_hardpps_lock); + raw_spin_unlock_irq(&pps_kc_hardpps_lock); } /* pps_kc_event - call hardpps() on PPS event @@ -102,8 +102,8 @@ void pps_kc_event(struct pps_device *pps, struct pps_event_time *ts, unsigned long flags; /* Pass some events to kernel consumer if activated */ - spin_lock_irqsave(&pps_kc_hardpps_lock, flags); + raw_spin_lock_irqsave(&pps_kc_hardpps_lock, flags); if (pps == pps_kc_hardpps_dev && event & pps_kc_hardpps_mode) hardpps(&ts->ts_real, &ts->ts_raw); - spin_unlock_irqrestore(&pps_kc_hardpps_lock, flags); + 
raw_spin_unlock_irqrestore(&pps_kc_hardpps_lock, flags); } -- 2.47.3 ^ permalink raw reply related [flat|nested] 5+ messages in thread
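The binding rules that pps_kc_hardpps_lock protects are simple enough to model in userspace. The sketch below is not the kernel code: locking is omitted because it is single-threaded, error codes are simplified to -1, and the names are invented. It captures the invariant in kc.c that at most one source may be bound, edge == 0 means unbind, and only the currently bound source may unbind or change its edge bits:

```c
#include <assert.h>
#include <stddef.h>

struct pps_source { int id; };

static struct pps_source *kc_dev;  /* bound source, or NULL */
static int kc_mode;                /* edge bits for the kernel consumer */

/* Userspace model of the pps_kc_bind() state machine. In the kernel
 * this state is now guarded by a raw_spinlock so that pps_kc_event()
 * may call hardpps(), which itself nests the raw tk_core lock. */
static int kc_bind(struct pps_source *pps, int edge)
{
    if (edge == 0) {                    /* unbind request */
        if (kc_dev != pps)
            return -1;                  /* this source is not bound */
        kc_mode = 0;
        kc_dev = NULL;
        return 0;
    }
    if (kc_dev != NULL && kc_dev != pps)
        return -1;                      /* another consumer is bound */
    kc_mode = edge;                     /* bind, or update edge bits */
    kc_dev = pps;
    return 0;
}
```

The locked region in pps_kc_event() is just a pointer comparison plus the hardpps() call, which is why converting the lock to a raw_spinlock is safe: nothing inside can sleep.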
* Re: [PATCH v3 1/3] pps: pps-gpio: split IRQ handler into hardirq and threaded parts 2026-04-09 15:27 ` [PATCH v3 1/3] pps: pps-gpio: split IRQ handler into hardirq and threaded parts Michael Byczkowski 2026-04-09 15:28 ` [PATCH v3 2/3] pps: convert pps_device lock to raw_spinlock for PREEMPT_RT Michael Byczkowski @ 2026-04-09 18:52 ` David Laight 1 sibling, 0 replies; 5+ messages in thread From: David Laight @ 2026-04-09 18:52 UTC (permalink / raw) To: Michael Byczkowski; +Cc: Rodolfo Giometti, Andrew Morton, linux-kernel On Thu, 9 Apr 2026 17:27:21 +0200 Michael Byczkowski <by@by-online.de> wrote: > On PREEMPT_RT, all IRQ handlers are force-threaded. The current > pps_gpio_irq_handler captures the PPS timestamp via pps_get_ts() > inside the handler, but on RT this runs in thread context — after > a scheduling delay that adds variable latency (jitter) to the > timestamp. > > Split the handler into a hardirq primary (pps_gpio_irq_hardirq) > that only captures the timestamp, and a threaded handler > (pps_gpio_irq_thread) that processes the event. With > request_threaded_irq(), the primary handler runs in hardirq context > even on PREEMPT_RT, preserving nanosecond timestamp precision. What happens if the threaded irq handler doesn't run until after the next (or more than one) hard irq? Threaded irqs really don't work well for timer interrupts at all. They end up like buses - none come for ages and then they all arrive at once. David > > On non-RT kernels, request_threaded_irq with an explicit primary > handler behaves identically to the previous request_irq call. ^ permalink raw reply [flat|nested] 5+ messages in thread
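The hazard David raises can be made concrete with a tiny userspace model (invented names, not from the driver): the single ts field added to pps_gpio_device_data is a one-deep mailbox, so a pulse that fires before the thread has consumed the previous timestamp silently overwrites it. The produced/consumed counters below are an assumption added for illustration, not something the patch carries; they make the loss observable:

```c
#include <assert.h>

/* One-deep timestamp handoff, as in the patch: the primary handler
 * overwrites ts_ns on every interrupt, whether or not the thread
 * has run in between. */
struct handoff {
    long         ts_ns;     /* last captured timestamp */
    unsigned int produced;  /* pulses seen by the primary handler */
    unsigned int consumed;  /* pulses processed by the thread */
};

/* Analogue of pps_gpio_irq_hardirq(): capture only. */
static void primary(struct handoff *h, long now_ns)
{
    h->ts_ns = now_ns;      /* clobbers any unconsumed timestamp */
    h->produced++;
}

/* Analogue of pps_gpio_irq_thread(): returns the latest timestamp
 * and reports how many pulses were overwritten unprocessed. */
static long thread_fn(struct handoff *h, unsigned int *missed)
{
    *missed = h->produced - h->consumed - 1;
    h->consumed = h->produced;
    return h->ts_ns;
}
```

For a 1 Hz PPS signal the thread has nearly a full second of slack, so in practice the window is wide; but a sequence counter like this, or a short ring buffer, would let the driver detect (or avoid) the loss if the thread is ever starved.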