From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
stable@vger.kernel.org,
"Steven Rostedt (VMware)" <rostedt@goodmis.org>,
"Peter Zijlstra (Intel)" <peterz@infradead.org>,
Clark Williams <williams@redhat.com>,
Daniel Bristot de Oliveira <bristot@redhat.com>,
John Kacur <jkacur@redhat.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
Mike Galbraith <efault@gmx.de>, Scott Wood <swood@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@kernel.org>
Subject: [PATCH 4.4 65/96] sched/rt: Simplify the IPI based RT balancing logic
Date: Tue, 28 Nov 2017 11:23:14 +0100 [thread overview]
Message-ID: <20171128100507.086117109@linuxfoundation.org> (raw)
In-Reply-To: <20171128100503.067621614@linuxfoundation.org>
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steven Rostedt (Red Hat) <rostedt@goodmis.org>
commit 4bdced5c9a2922521e325896a7bbbf0132c94e56 upstream.
When a CPU lowers its priority (schedules out a high priority task for a
lower priority one), a check is made to see if any other CPU has overloaded
RT tasks (more than one). It checks the rto_mask to determine this, and if
so, it requests to pull one of those tasks to itself, provided the
non-running RT task is of higher priority than the new priority of the next
task to run on the current CPU.
When dealing with a large number of CPUs, the original pull logic suffered
from heavy lock contention on a single CPU's run queue, which caused huge
latency across all CPUs. This happened when only one CPU had overloaded RT
tasks while a bunch of other CPUs were lowering their priority. To solve
this issue, commit:
b6366f048e0c ("sched/rt: Use IPI to trigger RT task push migration instead of pulling")
changed the way to request a pull. Instead of grabbing the lock of the
overloaded CPU's runqueue, it simply sent an IPI to that CPU to do the work.
Although the IPI logic worked very well in removing the large latency build
up, it still could suffer from a large number of IPIs being sent to a single
CPU. On an 80 CPU box, I measured over 200us of processing IPIs. Worse yet,
when I tested this on a 120 CPU box, with a stress test that had lots of
RT tasks scheduling on all CPUs, it actually triggered the hard lockup
detector! One CPU had so many IPIs sent to it that, due to the restart
mechanism triggered whenever the source run queue has a priority status
change, it spent minutes! processing the IPIs.
Thinking about this further, I realized there's no reason for each run queue
to send its own IPI. All CPUs with overloaded RT tasks must be scanned
whether one or many CPUs are lowering their priority, because there's
currently no way to find the CPU with the highest priority task that can be
scheduled to one of these CPUs. So there really only needs to be one IPI
being sent around at a time.
This greatly simplifies the code!
The new approach is to have each root domain have its own irq work, as the
rto_mask is per root domain. The root domain has the following fields
attached to it (a condensed code view follows this list):
rto_push_work - the irq work to process each CPU set in rto_mask
rto_lock - the lock to protect some of the other rto fields
rto_loop_start - an atomic that keeps contention down on rto_lock. The
first CPU scheduling in a lower priority task is the one
to kick off the process.
rto_loop_next - an atomic that gets incremented for each CPU that
schedules in a lower priority task.
rto_loop - a variable protected by rto_lock that is used to
compare against rto_loop_next
rto_cpu - The cpu to send the next IPI to, also protected by
the rto_lock.
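In code form, these are roughly the following (a condensed, annotated view;
the fields actually land in struct root_domain in the sched.h hunk at the
end of this patch):

#ifdef HAVE_RT_PUSH_IPI
	struct irq_work rto_push_work;	/* queued to each CPU in rto_mask in turn */
	raw_spinlock_t rto_lock;
	/* Only updated and read with rto_lock held: */
	int rto_loop;			/* generation the current scan is serving */
	int rto_cpu;			/* next IPI target; -1: no IPI in flight */
	/* Updated without the lock: */
	atomic_t rto_loop_next;		/* bumped on every priority lowering */
	atomic_t rto_loop_start;	/* 0/1 guard: one loop initiator at a time */
#endif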
When a CPU schedules in a lower priority task and wants to make sure
overloaded CPUs know about it, it increments rto_loop_next. Then it
atomically sets rto_loop_start with a cmpxchg. If the old value is not "0",
then it is done, as another CPU is kicking off the IPI loop. If the old
value is "0", then it will take the rto_lock to synchronize with a possible
IPI being sent around to the overloaded CPUs.
If rto_cpu is negative (no valid CPU), then there's either no IPI being sent
around, or one is about to finish. In that case rto_cpu is set to the first
CPU in rto_mask and an IPI is sent to that CPU. If there are no CPUs set in
rto_mask, then there's nothing to be done.
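As a minimal sketch, the start-side protocol looks like this in standalone
C11 (all names here are stand-ins for the kernel code; note that the
kernel's atomic_cmpxchg_acquire() returns the old value, while the C11
compare-exchange below returns success, so the sense of the test flips):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int rto_loop_start;	/* 0/1: a CPU is starting the loop */
static atomic_int rto_loop_next;	/* generation, bumped per priority drop */

static bool rto_start_trylock(atomic_int *v)
{
	int zero = 0;

	return atomic_compare_exchange_strong_explicit(v, &zero, 1,
			memory_order_acquire, memory_order_relaxed);
}

static void rto_start_unlock(atomic_int *v)
{
	atomic_store_explicit(v, 0, memory_order_release);
}

/* Called when this CPU schedules in a lower priority task. */
static void priority_dropped(void)
{
	/*
	 * Always recorded first: a loop already in flight will notice the
	 * bump and rescan, so losing the trylock below never loses an event.
	 */
	atomic_fetch_add(&rto_loop_next, 1);

	/* Only one CPU at a time gets to kick off the IPI loop. */
	if (!rto_start_trylock(&rto_loop_start))
		return;

	/*
	 * ... take rto_lock; if rto_cpu is negative (no IPI in flight),
	 * pick the first CPU in rto_mask and irq_work_queue_on() to it ...
	 */

	rto_start_unlock(&rto_loop_start);
}

The increment must come first: if a loop is already in flight, it compares
rto_loop against rto_loop_next under rto_lock at the end of its scan and
goes around again, so the event is never lost even when the trylock fails.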
When the CPU receives the IPI, it will first try to push off any RT tasks
that are queued on the CPU but can't run because a higher priority RT task
is currently running on that CPU.
Then it takes the rto_lock and looks for the next CPU in the rto_mask. If it
finds one, it simply sends an IPI to that CPU and the process continues.
If there are no more CPUs in the rto_mask, then rto_loop is compared with
rto_loop_next. If they match, everything is done and the process is over. If
they do not match, then a CPU scheduled in a lower priority task while the
IPI was being passed around, and the process needs to start again. The first
CPU in rto_mask is sent the IPI.
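A sketch of that termination check, again in standalone C11 with the
cpumask walk stubbed out (the real thing is rto_next_cpu() in the diff
below; next_overloaded_cpu() is a hypothetical stand-in, and the caller is
assumed to hold rto_lock):

#include <stdatomic.h>

struct rd_model {
	int rto_cpu;		/* next IPI target; -1: none in flight */
	int rto_loop;		/* generation the current scan is serving */
	atomic_int rto_loop_next;
};

/* Stand-in for cpumask_next(rd->rto_cpu, rd->rto_mask): the next CPU with
 * overloaded RT tasks after rto_cpu, or -1 at the end of the mask. */
extern int next_overloaded_cpu(struct rd_model *rd);

/* Returns the next CPU to IPI, or -1 when the loop can stop. */
int rto_next_cpu_model(struct rd_model *rd)
{
	for (;;) {
		int cpu = next_overloaded_cpu(rd);
		int next;

		rd->rto_cpu = cpu;
		if (cpu >= 0)
			return cpu;	/* keep the IPI chain going */

		rd->rto_cpu = -1;

		/*
		 * End of the mask: stop only if no CPU lowered its
		 * priority since this scan's generation was taken.
		 */
		next = atomic_load_explicit(&rd->rto_loop_next,
					    memory_order_acquire);
		if (rd->rto_loop == next)
			return -1;

		rd->rto_loop = next;	/* missed an update: scan again */
	}
}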
This change removes this duplication of work in the IPI logic, and greatly
lowers the latency caused by the IPIs. This removed the lockup happening on
the 120 CPU machine. It also simplifies the code tremendously. What else
could anyone ask for?
Thanks to Peter Zijlstra for simplifying the rto_loop_start atomic logic and
supplying me with the rto_start_trylock() and rto_start_unlock() helper
functions.
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott Wood <swood@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170424114732.1aac6dc4@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/sched/core.c | 6 +
kernel/sched/rt.c | 235 ++++++++++++++++++++++++---------------------------
kernel/sched/sched.h | 24 +++--
3 files changed, 138 insertions(+), 127 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5907,6 +5907,12 @@ static int init_rootdomain(struct root_d
if (!zalloc_cpumask_var(&rd->rto_mask, GFP_KERNEL))
goto free_dlo_mask;
+#ifdef HAVE_RT_PUSH_IPI
+ rd->rto_cpu = -1;
+ raw_spin_lock_init(&rd->rto_lock);
+ init_irq_work(&rd->rto_push_work, rto_push_irq_work_func);
+#endif
+
init_dl_bw(&rd->dl_bw);
if (cpudl_init(&rd->cpudl) != 0)
goto free_dlo_mask;
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -64,10 +64,6 @@ static void start_rt_bandwidth(struct rt
raw_spin_unlock(&rt_b->rt_runtime_lock);
}
-#if defined(CONFIG_SMP) && defined(HAVE_RT_PUSH_IPI)
-static void push_irq_work_func(struct irq_work *work);
-#endif
-
void init_rt_rq(struct rt_rq *rt_rq)
{
struct rt_prio_array *array;
@@ -87,13 +83,6 @@ void init_rt_rq(struct rt_rq *rt_rq)
rt_rq->rt_nr_migratory = 0;
rt_rq->overloaded = 0;
plist_head_init(&rt_rq->pushable_tasks);
-
-#ifdef HAVE_RT_PUSH_IPI
- rt_rq->push_flags = 0;
- rt_rq->push_cpu = nr_cpu_ids;
- raw_spin_lock_init(&rt_rq->push_lock);
- init_irq_work(&rt_rq->push_work, push_irq_work_func);
-#endif
#endif /* CONFIG_SMP */
/* We start is dequeued state, because no RT tasks are queued */
rt_rq->rt_queued = 0;
@@ -1802,160 +1791,166 @@ static void push_rt_tasks(struct rq *rq)
}
#ifdef HAVE_RT_PUSH_IPI
+
/*
- * The search for the next cpu always starts at rq->cpu and ends
- * when we reach rq->cpu again. It will never return rq->cpu.
- * This returns the next cpu to check, or nr_cpu_ids if the loop
- * is complete.
+ * When a high priority task schedules out from a CPU and a lower priority
+ * task is scheduled in, a check is made to see if there's any RT tasks
+ * on other CPUs that are waiting to run because a higher priority RT task
+ * is currently running on its CPU. In this case, the CPU with multiple RT
+ * tasks queued on it (overloaded) needs to be notified that a CPU has opened
+ * up that may be able to run one of its non-running queued RT tasks.
+ *
+ * All CPUs with overloaded RT tasks need to be notified as there is currently
+ * no way to know which of these CPUs have the highest priority task waiting
+ * to run. Instead of taking a spinlock on each of these CPUs, which has
+ * been shown to cause large latency when done on machines with many
+ * CPUs, an IPI is sent to the CPUs to have them push off the overloaded
+ * RT tasks waiting to run.
+ *
+ * Just sending an IPI to each of the CPUs is also an issue, as on large
+ * CPU count machines, this can cause an IPI storm on a CPU, especially
+ * if it's the only CPU with multiple RT tasks queued, and a large number
+ * of CPUs are scheduling a lower priority task at the same time.
+ *
+ * Each root domain has its own irq work function that can iterate over
+ * all CPUs with RT overloaded tasks. Since all CPUs with overloaded RT
+ * tasks must be checked whether there's one or many CPUs that are lowering
+ * their priority, there's a single irq work iterator that will try to
+ * push off RT tasks that are waiting to run.
+ *
+ * When a CPU schedules a lower priority task, it will kick off the
+ * irq work iterator that will jump to each CPU with overloaded RT tasks.
+ * As it only takes the first CPU that schedules a lower priority task
+ * to start the process, the rto_loop_start variable is set with a cmpxchg,
+ * and only the CPU that sees it go from zero to one takes the rto_lock.
+ * This prevents high contention on the lock as the process handles all
+ * CPUs scheduling lower priority tasks.
+ *
+ * All CPUs that are scheduling a lower priority task will increment the
+ * rto_loop_next variable. This will make sure that the irq work iterator
+ * checks all RT overloaded CPUs whenever a CPU schedules a new lower
+ * priority task, even if the iterator is in the middle of a scan. Incrementing
+ * rto_loop_next will cause the iterator to perform another scan.
*
- * rq->rt.push_cpu holds the last cpu returned by this function,
- * or if this is the first instance, it must hold rq->cpu.
*/
static int rto_next_cpu(struct rq *rq)
{
- int prev_cpu = rq->rt.push_cpu;
+ struct root_domain *rd = rq->rd;
+ int next;
int cpu;
- cpu = cpumask_next(prev_cpu, rq->rd->rto_mask);
-
/*
- * If the previous cpu is less than the rq's CPU, then it already
- * passed the end of the mask, and has started from the beginning.
- * We end if the next CPU is greater or equal to rq's CPU.
+ * When starting the IPI RT pushing, the rto_cpu is set to -1, and
+ * rto_next_cpu() will simply return the first CPU found in
+ * the rto_mask.
+ *
+ * If rto_next_cpu() is called when rto_cpu is a valid cpu, it
+ * will return the next CPU found in the rto_mask.
+ *
+ * If there are no more CPUs left in the rto_mask, then a check is made
+ * against rto_loop and rto_loop_next. rto_loop is only updated with
+ * the rto_lock held, but any CPU may increment the rto_loop_next
+ * without any locking.
*/
- if (prev_cpu < rq->cpu) {
- if (cpu >= rq->cpu)
- return nr_cpu_ids;
+ for (;;) {
- } else if (cpu >= nr_cpu_ids) {
- /*
- * We passed the end of the mask, start at the beginning.
- * If the result is greater or equal to the rq's CPU, then
- * the loop is finished.
- */
- cpu = cpumask_first(rq->rd->rto_mask);
- if (cpu >= rq->cpu)
- return nr_cpu_ids;
- }
- rq->rt.push_cpu = cpu;
+ /* When rto_cpu is -1 this acts like cpumask_first() */
+ cpu = cpumask_next(rd->rto_cpu, rd->rto_mask);
- /* Return cpu to let the caller know if the loop is finished or not */
- return cpu;
-}
+ rd->rto_cpu = cpu;
-static int find_next_push_cpu(struct rq *rq)
-{
- struct rq *next_rq;
- int cpu;
+ if (cpu < nr_cpu_ids)
+ return cpu;
- while (1) {
- cpu = rto_next_cpu(rq);
- if (cpu >= nr_cpu_ids)
- break;
- next_rq = cpu_rq(cpu);
+ rd->rto_cpu = -1;
+
+ /*
+ * ACQUIRE ensures we see the @rto_mask changes
+ * made prior to the @next value observed.
+ *
+ * Matches WMB in rt_set_overload().
+ */
+ next = atomic_read_acquire(&rd->rto_loop_next);
- /* Make sure the next rq can push to this rq */
- if (next_rq->rt.highest_prio.next < rq->rt.highest_prio.curr)
+ if (rd->rto_loop == next)
break;
+
+ rd->rto_loop = next;
}
- return cpu;
+ return -1;
}
-#define RT_PUSH_IPI_EXECUTING 1
-#define RT_PUSH_IPI_RESTART 2
+static inline bool rto_start_trylock(atomic_t *v)
+{
+ return !atomic_cmpxchg_acquire(v, 0, 1);
+}
-static void tell_cpu_to_push(struct rq *rq)
+static inline void rto_start_unlock(atomic_t *v)
{
- int cpu;
+ atomic_set_release(v, 0);
+}
- if (rq->rt.push_flags & RT_PUSH_IPI_EXECUTING) {
- raw_spin_lock(&rq->rt.push_lock);
- /* Make sure it's still executing */
- if (rq->rt.push_flags & RT_PUSH_IPI_EXECUTING) {
- /*
- * Tell the IPI to restart the loop as things have
- * changed since it started.
- */
- rq->rt.push_flags |= RT_PUSH_IPI_RESTART;
- raw_spin_unlock(&rq->rt.push_lock);
- return;
- }
- raw_spin_unlock(&rq->rt.push_lock);
- }
+static void tell_cpu_to_push(struct rq *rq)
+{
+ int cpu = -1;
- /* When here, there's no IPI going around */
+ /* Keep the loop going if the IPI is currently active */
+ atomic_inc(&rq->rd->rto_loop_next);
- rq->rt.push_cpu = rq->cpu;
- cpu = find_next_push_cpu(rq);
- if (cpu >= nr_cpu_ids)
+ /* Only one CPU can initiate a loop at a time */
+ if (!rto_start_trylock(&rq->rd->rto_loop_start))
return;
- rq->rt.push_flags = RT_PUSH_IPI_EXECUTING;
+ raw_spin_lock(&rq->rd->rto_lock);
+
+ /*
+ * The rto_cpu is updated under the lock, if it has a valid cpu
+ * then the IPI is still running and will continue due to the
+ * update to loop_next, and nothing needs to be done here.
+ * Otherwise it is finishing up and an ipi needs to be sent.
+ */
+ if (rq->rd->rto_cpu < 0)
+ cpu = rto_next_cpu(rq);
+
+ raw_spin_unlock(&rq->rd->rto_lock);
- irq_work_queue_on(&rq->rt.push_work, cpu);
+ rto_start_unlock(&rq->rd->rto_loop_start);
+
+ if (cpu >= 0)
+ irq_work_queue_on(&rq->rd->rto_push_work, cpu);
}
/* Called from hardirq context */
-static void try_to_push_tasks(void *arg)
+void rto_push_irq_work_func(struct irq_work *work)
{
- struct rt_rq *rt_rq = arg;
- struct rq *rq, *src_rq;
- int this_cpu;
+ struct rq *rq;
int cpu;
- this_cpu = rt_rq->push_cpu;
+ rq = this_rq();
- /* Paranoid check */
- BUG_ON(this_cpu != smp_processor_id());
-
- rq = cpu_rq(this_cpu);
- src_rq = rq_of_rt_rq(rt_rq);
-
-again:
+ /*
+ * We do not need to grab the lock to check for has_pushable_tasks.
+ * When it gets updated, a check is made if a push is possible.
+ */
if (has_pushable_tasks(rq)) {
raw_spin_lock(&rq->lock);
- push_rt_task(rq);
+ push_rt_tasks(rq);
raw_spin_unlock(&rq->lock);
}
- /* Pass the IPI to the next rt overloaded queue */
- raw_spin_lock(&rt_rq->push_lock);
- /*
- * If the source queue changed since the IPI went out,
- * we need to restart the search from that CPU again.
- */
- if (rt_rq->push_flags & RT_PUSH_IPI_RESTART) {
- rt_rq->push_flags &= ~RT_PUSH_IPI_RESTART;
- rt_rq->push_cpu = src_rq->cpu;
- }
+ raw_spin_lock(&rq->rd->rto_lock);
- cpu = find_next_push_cpu(src_rq);
+ /* Pass the IPI to the next rt overloaded queue */
+ cpu = rto_next_cpu(rq);
- if (cpu >= nr_cpu_ids)
- rt_rq->push_flags &= ~RT_PUSH_IPI_EXECUTING;
- raw_spin_unlock(&rt_rq->push_lock);
+ raw_spin_unlock(&rq->rd->rto_lock);
- if (cpu >= nr_cpu_ids)
+ if (cpu < 0)
return;
- /*
- * It is possible that a restart caused this CPU to be
- * chosen again. Don't bother with an IPI, just see if we
- * have more to push.
- */
- if (unlikely(cpu == rq->cpu))
- goto again;
-
/* Try the next RT overloaded CPU */
- irq_work_queue_on(&rt_rq->push_work, cpu);
-}
-
-static void push_irq_work_func(struct irq_work *work)
-{
- struct rt_rq *rt_rq = container_of(work, struct rt_rq, push_work);
-
- try_to_push_tasks(rt_rq);
+ irq_work_queue_on(&rq->rd->rto_push_work, cpu);
}
#endif /* HAVE_RT_PUSH_IPI */
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -429,7 +429,7 @@ static inline int rt_bandwidth_enabled(v
}
/* RT IPI pull logic requires IRQ_WORK */
-#ifdef CONFIG_IRQ_WORK
+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_SMP)
# define HAVE_RT_PUSH_IPI
#endif
@@ -450,12 +450,6 @@ struct rt_rq {
unsigned long rt_nr_total;
int overloaded;
struct plist_head pushable_tasks;
-#ifdef HAVE_RT_PUSH_IPI
- int push_flags;
- int push_cpu;
- struct irq_work push_work;
- raw_spinlock_t push_lock;
-#endif
#endif /* CONFIG_SMP */
int rt_queued;
@@ -537,6 +531,19 @@ struct root_domain {
struct dl_bw dl_bw;
struct cpudl cpudl;
+#ifdef HAVE_RT_PUSH_IPI
+ /*
+ * For IPI pull requests, loop across the rto_mask.
+ */
+ struct irq_work rto_push_work;
+ raw_spinlock_t rto_lock;
+ /* These are only updated and read within rto_lock */
+ int rto_loop;
+ int rto_cpu;
+ /* These atomics are updated outside of a lock */
+ atomic_t rto_loop_next;
+ atomic_t rto_loop_start;
+#endif
/*
* The "RT overload" flag: it gets set if a CPU has more than
* one runnable RT task.
@@ -547,6 +554,9 @@ struct root_domain {
extern struct root_domain def_root_domain;
+#ifdef HAVE_RT_PUSH_IPI
+extern void rto_push_irq_work_func(struct irq_work *work);
+#endif
#endif /* CONFIG_SMP */
/*