* [PATCH] perf_event: fix loss of notification with multi-event sampling
@ 2011-11-16 14:53 Stephane Eranian
2011-11-22 10:40 ` Peter Zijlstra
0 siblings, 1 reply; 11+ messages in thread
From: Stephane Eranian @ 2011-11-16 14:53 UTC (permalink / raw)
To: linux-kernel; +Cc: peterz, mingo
When you do:
$ perf record -e cycles,cycles,cycles noploop 10
You expect about 10,000 samples for each event, i.e., 10s at
1000 samples/sec. However, this is not what happens: you
get far fewer samples, around 3,700 per event:
$ perf report -D | tail -15
Aggregated stats:
TOTAL events: 10998
MMAP events: 66
COMM events: 2
SAMPLE events: 10930
cycles stats:
TOTAL events: 3644
SAMPLE events: 3644
cycles stats:
TOTAL events: 3642
SAMPLE events: 3642
cycles stats:
TOTAL events: 3644
SAMPLE events: 3644
On an Intel Nehalem or even an AMD64 system, there are 4 counters capable
of measuring cycles, so there is plenty of space to measure those
events without multiplexing (even with the NMI watchdog active).
And even with multiplexing, we'd expect roughly the same number
of samples per event.
The root of the problem was that when the event that caused the buffer
to become full was not the first event passed on the cmdline, the user
notification would get lost. The notification was sent to the file
descriptor of the overflowed event but the perf tool was not polling
on it. The perf tool aggregates all samples into a single buffer, i.e.,
the buffer of the first event. Consequently, it assumes notifications
for any event will come via that descriptor.
One can discuss the ABI on this particular problem. It would seem natural
that if a tool aggregates all samples into a single buffer, it would expect
notifications to come on the file descriptor to which the buffer is bound.
The alternative approach would be to still poll on all the descriptors,
despite the aggregation. Perf is written assuming the former, which seems
reasonable. Therefore let's fix the kernel.
The patch fixes this problem by moving the waitq into the ring_buffer
so it can be shared between the 'aggregated' events. The buffer is removed
only when the last event is destroyed. Therefore, we don't need extra ref
counting. The notification only needs the waitq.
With the patch, we get:
$ perf report -D | tail -20
Aggregated stats:
TOTAL events: 30100
MMAP events: 80
COMM events: 2
EXIT events: 2
SAMPLE events: 30016
cycles stats:
TOTAL events: 10014
MMAP events: 5
COMM events: 1
EXIT events: 2
SAMPLE events: 10006
cycles stats:
TOTAL events: 10005
SAMPLE events: 10005
cycles stats:
TOTAL events: 10005
SAMPLE events: 10005
Signed-off-by: Stephane Eranian <eranian@google.com>
---
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 1e9ebe5..c9e2108 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -823,10 +823,11 @@ struct perf_event {
struct user_struct *mmap_user;
struct ring_buffer *rb;
- /* poll related */
- wait_queue_head_t waitq;
+ /* ring_buffer waitq pointer */
+ wait_queue_head_t *waitq;
struct fasync_struct *fasync;
+
/* delayed work for NMIs and such */
int pending_wakeup;
int pending_kill;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2e41c8e..e1dd7a0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2754,6 +2754,7 @@ static void free_event(struct perf_event *event)
if (event->rb) {
ring_buffer_put(event->rb);
event->rb = NULL;
+ event->waitq = NULL;
}
if (is_cgroup_event(event))
@@ -2990,7 +2991,9 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
events = atomic_xchg(&rb->poll, 0);
rcu_read_unlock();
- poll_wait(file, &event->waitq, wait);
+ /* rb==NULL => waitq=NULL */
+ if (event->waitq)
+ poll_wait(file, event->waitq, wait);
return events;
}
@@ -3440,6 +3443,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = -ENOMEM;
goto unlock;
}
+ event->waitq = &rb->waitq;
rcu_assign_pointer(event->rb, rb);
atomic_long_add(user_extra, &user->locked_vm);
@@ -3494,7 +3498,8 @@ static const struct file_operations perf_fops = {
void perf_event_wakeup(struct perf_event *event)
{
- wake_up_all(&event->waitq);
+ if (event->waitq)
+ wake_up_all(event->waitq);
if (event->pending_kill) {
kill_fasync(&event->fasync, SIGIO, event->pending_kill);
@@ -5621,7 +5626,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
INIT_LIST_HEAD(&event->group_entry);
INIT_LIST_HEAD(&event->event_entry);
INIT_LIST_HEAD(&event->sibling_list);
- init_waitqueue_head(&event->waitq);
init_irq_work(&event->pending, perf_pending_event);
mutex_init(&event->mmap_mutex);
@@ -5826,6 +5830,7 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
}
old_rb = event->rb;
+ event->waitq = output_event->waitq;
rcu_assign_pointer(event->rb, rb);
ret = 0;
unlock:
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index be4a43f..2bf9546 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -19,6 +19,9 @@ struct ring_buffer {
atomic_t poll; /* POLL_ for wakeups */
+ /* poll related */
+ wait_queue_head_t waitq;
+
local_t head; /* write position */
local_t nest; /* nested writers */
local_t events; /* event limit */
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index a2a2920..ef4ea75 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -209,6 +209,7 @@ ring_buffer_init(struct ring_buffer *rb, long watermark, int flags)
rb->writable = 1;
atomic_set(&rb->refcount, 1);
+ init_waitqueue_head(&rb->waitq);
}
#ifndef CONFIG_PERF_USE_VMALLOC
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-16 14:53 [PATCH] perf_event: fix loss of notification with multi-event sampling Stephane Eranian
@ 2011-11-22 10:40 ` Peter Zijlstra
2011-11-22 13:15 ` Stephane Eranian
0 siblings, 1 reply; 11+ messages in thread
From: Peter Zijlstra @ 2011-11-22 10:40 UTC (permalink / raw)
To: Stephane Eranian; +Cc: linux-kernel, mingo
On Wed, 2011-11-16 at 15:53 +0100, Stephane Eranian wrote:
> + /* ring_buffer waitq pointer */
> + wait_queue_head_t *waitq;
Not a big issue, but is there a reason to keep this pointer instead of
always having to do:
rcu_read_lock();
rb = rcu_dereference(event->rb);
if (rb)
wake_up_all(rb->waitq);
rcu_read_unlock();
Hmm, looking at that there must be a reason we go through all the RCU
trouble for event->rb, assuming there is, your lack of rcu in say
perf_poll() could go funny.
/me ponders..
Ah, could it be a race of poll()/wakeup() vs perf_event_set_output() ?
Suppose you're a threaded proglet and either one cpu/thread has an
incoming event that does a wakeup, or one thread is stuck in poll()
whilst another thread does perf_event_set_output(), it could swizzle the
event->rb right out from under you.
Now, this is of course a somewhat silly thing to do.. but still it
shouldn't make things go *bang*.
Now the above wake_up_all() thing would work just fine, its just poll()
that I'm not sure how to fix.
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-22 10:40 ` Peter Zijlstra
@ 2011-11-22 13:15 ` Stephane Eranian
2011-11-22 13:29 ` Peter Zijlstra
0 siblings, 1 reply; 11+ messages in thread
From: Stephane Eranian @ 2011-11-22 13:15 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo
On Tue, Nov 22, 2011 at 11:40 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, 2011-11-16 at 15:53 +0100, Stephane Eranian wrote:
>> + /* ring_buffer waitq pointer */
>> + wait_queue_head_t *waitq;
>
> Not a big issue, but is there a reason to keep this pointer instead of
> always having to do:
>
> rcu_read_lock();
> rb = rcu_dereference(event->rb);
> if (rb)
> wake_up_all(rb->waitq);
> rcu_read_unlock();
>
> Hmm, looking at that there must be a reason we go through all the RCU
> trouble for event->rb, assuming there is, your lack of rcu in say
> perf_poll() could go funny.
>
I think we can drop waitq and go through event->rb each time we need to get
to the wait queue.
> /me ponders..
>
> Ah, could it be a race of poll()/wakeup() vs perf_event_set_output() ?
>
Are you saying that by dropping event->waitq in favor of event->rb->waitq
we make this problem disappear due to rcu protections?
poll_wait() is a blocking call. It may wait on a stale waitq, but that problem
was probably already there. I am not clear as to what to do about that:
in perf_event_set_output() you would need to wake up from poll_wait() and then
go back in with the new waitq.
Similarly, I am not clear as to what happens when you close an event for
which you have a waiter in poll_wait(). I assume you wakeup from it.
But I don't see where that's implemented.
> Suppose you're a threaded proglet and either one cpu/thread has an
> incoming event that does a wakeup, or one thread is stuck in poll()
> whilst another thread does perf_event_set_output(), it could swizzle the
> event->rb right out from under you.
>
> Now, this is of course a somewhat silly thing to do.. but still it
> shouldn't make things go *bang*.
>
> Now the above wake_up_all() thing would work just fine, its just poll()
> that I'm not sure how to fix.
>
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-22 13:15 ` Stephane Eranian
@ 2011-11-22 13:29 ` Peter Zijlstra
2011-11-22 14:13 ` Stephane Eranian
2011-11-23 9:30 ` Stephane Eranian
0 siblings, 2 replies; 11+ messages in thread
From: Peter Zijlstra @ 2011-11-22 13:29 UTC (permalink / raw)
To: Stephane Eranian; +Cc: linux-kernel, mingo
On Tue, 2011-11-22 at 14:15 +0100, Stephane Eranian wrote:
> > Ah, could it be a race of poll()/wakeup() vs perf_event_set_output() ?
> >
> Are you saying that by dropping event->waitq in favor of event->rb->waitq
> we make this problem disappear due to rcu protections?
Well, except..
> Poll_wait() is a blocking call. It may wait on a stale waitq. But that problem
> was probably already there. I am not clear as to what to do about that.
> in perf_set_output() you would need to wakeup from poll_wait() and then
> go back in with the new waitq.
Right, the whole blocking thing is a problem, and the whole poll()
interface always makes my head hurt.
If there was a go-sleep and wake-up side to poll we could do
ring_buffer_get()/put() and fix this problem, but I'm not finding a way
to make that happen quite yet.
> Similarly, I am not clear as to what happens when you close an event for
> which you have a waiter in poll_wait(). I assume you wakeup from it.
> But I don't see where that's implemented.
Good point, yes we should do that.
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-22 13:29 ` Peter Zijlstra
@ 2011-11-22 14:13 ` Stephane Eranian
2011-11-22 14:28 ` Stephane Eranian
2011-11-23 9:30 ` Stephane Eranian
1 sibling, 1 reply; 11+ messages in thread
From: Stephane Eranian @ 2011-11-22 14:13 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo
On Tue, Nov 22, 2011 at 2:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, 2011-11-22 at 14:15 +0100, Stephane Eranian wrote:
>
>> > Ah, could it be a race of poll()/wakeup() vs perf_event_set_output() ?
>> >
>> Are you saying that by dropping event->waitq in favor of event->rb->waitq
>> we make this problem disappear due to rcu protections?
>
> Well, except..
>
>> Poll_wait() is a blocking call. It may wait on a stale waitq. But that problem
>> was probably already there. I am not clear as to what to do about that.
>> in perf_set_output() you would need to wakeup from poll_wait() and then
>> go back in with the new waitq.
>
> Right, the whole blocking thing is a problem, and the whole poll()
> interface always makes my head hurt.
>
> If there was a go-sleep and wake-up side to poll we could do
> ring_buffer_get()/put() and fix this problem, but I'm not finding a way
> to make that happen quite yet.
>
>> Similarly, I am not clear as to what happens when you close an event for
>> which you have a waiter in poll_wait(). I assume you wakeup from it.
>> But I don't see where that's implemented.
>
> Good point, yes we should do that.
>
I looked at how this is done for regular files: eventpoll_release(file);
I think we need to have that call in free_event() or something like that.
I did verify that in a multi-threaded prog, you do get stuck in poll() if
one of the threads closes the event fd.
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-22 14:13 ` Stephane Eranian
@ 2011-11-22 14:28 ` Stephane Eranian
2011-11-22 21:15 ` Peter Zijlstra
0 siblings, 1 reply; 11+ messages in thread
From: Stephane Eranian @ 2011-11-22 14:28 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo
On Tue, Nov 22, 2011 at 3:13 PM, Stephane Eranian <eranian@google.com> wrote:
> On Tue, Nov 22, 2011 at 2:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
>> On Tue, 2011-11-22 at 14:15 +0100, Stephane Eranian wrote:
>>
>>> > Ah, could it be a race of poll()/wakeup() vs perf_event_set_output() ?
>>> >
>>> Are you saying that by dropping event->waitq in favor of event->rb->waitq
>>> we make this problem disappear due to rcu protections?
>>
>> Well, except..
>>
>>> Poll_wait() is a blocking call. It may wait on a stale waitq. But that problem
>>> was probably already there. I am not clear as to what to do about that.
>>> in perf_set_output() you would need to wakeup from poll_wait() and then
>>> go back in with the new waitq.
>>
>> Right, the whole blocking thing is a problem, and the whole poll()
>> interface always makes my head hurt.
>>
>> If there was a go-sleep and wake-up side to poll we could do
>> ring_buffer_get()/put() and fix this problem, but I'm not finding a way
>> to make that happen quite yet.
>>
>>> Similarly, I am not clear as to what happens when you close an event for
>>> which you have a waiter in poll_wait(). I assume you wakeup from it.
>>> But I don't see where that's implemented.
>>
>> Good point, yes we should do that.
>>
> I looked at how this is done for regular files: eventpoll_release(file);
Well, but that's called as part of __fput(), which is called before
fops->release(). So it should do it, unless you have to set some more
flags in poll(), i.e., more than POLLIN.
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-22 14:28 ` Stephane Eranian
@ 2011-11-22 21:15 ` Peter Zijlstra
0 siblings, 0 replies; 11+ messages in thread
From: Peter Zijlstra @ 2011-11-22 21:15 UTC (permalink / raw)
To: Stephane Eranian; +Cc: linux-kernel, mingo
The only thing I can seem to come up with is something like the below,
and uglier..
The below isn't compiled, and I suspect there's still a race between poll
and perf_event_set_output() which we could plug by taking mmap_mutex in
perf_poll(), but that'd be cheating..
also, yuck!
---
Index: linux-2.6/include/linux/perf_event.h
===================================================================
--- linux-2.6.orig/include/linux/perf_event.h
+++ linux-2.6/include/linux/perf_event.h
@@ -833,6 +833,7 @@ struct perf_event {
struct ring_buffer *rb;
/* poll related */
+ struct list_head rb_entry;
wait_queue_head_t waitq;
struct fasync_struct *fasync;
Index: linux-2.6/kernel/events/core.c
===================================================================
--- linux-2.6.orig/kernel/events/core.c
+++ linux-2.6/kernel/events/core.c
@@ -3014,8 +3014,10 @@ static unsigned int perf_poll(struct fil
rcu_read_lock();
rb = rcu_dereference(event->rb);
- if (rb)
+ if (rb) {
+ ring_buffer_attach(event, rb);
events = atomic_xchg(&rb->poll, 0);
+ }
rcu_read_unlock();
poll_wait(file, &event->waitq, wait);
@@ -3321,6 +3323,47 @@ static int perf_mmap_fault(struct vm_are
return ret;
}
+static void ring_buffer_attach(struct perf_event *event, struct ring_buffer *rb)
+{
+ unsigned long flags;
+
+ if (!list_empty(&event->rb_entry))
+ return;
+
+ spin_lock_irqsave(&rb->event_lock, flags);
+ if (!list_empty(&event->rb_entry))
+ goto unlock;
+
+ list_add(&event->rb_entry, &rb->event_list);
+unlock:
+ spin_unlock_irqrestore(&rb->event_lock, flags);
+}
+
+static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb)
+{
+ unsigned long flags;
+
+ if (list_empty(&event->rb_entry))
+ return;
+
+ spin_lock_irqsave(&rb->event_lock, flags);
+ list_del_init(&event->rb_entry);
+ wake_up_all(&event->waitq);
+ spin_unlock_irqrestore(&rb->event_lock, flags);
+}
+
+static void ring_buffer_wakeup(struct perf_event *event)
+{
+ struct ring_buffer *rb;
+
+ rcu_read_lock();
+ rb = rcu_dereference(event->rb);
+ if (rb) {
+ list_for_each_entry_rcu(event, &rb->event_list, rb_entry)
+ wake_up_all(&event->waitq);
+ }
+ rcu_read_unlock();
+}
+
static void rb_free_rcu(struct rcu_head *rcu_head)
{
struct ring_buffer *rb;
@@ -3346,9 +3389,19 @@ static struct ring_buffer *ring_buffer_g
static void ring_buffer_put(struct ring_buffer *rb)
{
+ struct perf_event *event, *n;
+ unsigned long flags;
+
if (!atomic_dec_and_test(&rb->refcount))
return;
+ spin_lock_irqsave(&rb->event_lock, flags);
+ list_for_each_entry_safe(event, n, &rb->event_list, rb_entry) {
+ list_del_init(&event->rb_entry);
+ wake_up_all(&event->waitq);
+ }
+ spin_unlock_irqrestore(&rb->event_lock, flags);
+
call_rcu(&rb->rcu_head, rb_free_rcu);
}
@@ -3371,6 +3424,7 @@ static void perf_mmap_close(struct vm_ar
atomic_long_sub((size >> PAGE_SHIFT) + 1, &user->locked_vm);
vma->vm_mm->pinned_vm -= event->mmap_locked;
rcu_assign_pointer(event->rb, NULL);
+ ring_buffer_detach(event, rb);
mutex_unlock(&event->mmap_mutex);
ring_buffer_put(rb);
@@ -3527,7 +3581,7 @@ static const struct file_operations perf
void perf_event_wakeup(struct perf_event *event)
{
- wake_up_all(&event->waitq);
+ ring_buffer_wakeup(event);
if (event->pending_kill) {
kill_fasync(&event->fasync, SIGIO, event->pending_kill);
@@ -5882,6 +5936,8 @@ perf_event_set_output(struct perf_event
old_rb = event->rb;
rcu_assign_pointer(event->rb, rb);
+ if (old_rb)
+ ring_buffer_detach(event, old_rb);
ret = 0;
unlock:
mutex_unlock(&event->mmap_mutex);
Index: linux-2.6/kernel/events/internal.h
===================================================================
--- linux-2.6.orig/kernel/events/internal.h
+++ linux-2.6/kernel/events/internal.h
@@ -27,6 +27,10 @@ struct ring_buffer {
long watermark; /* wakeup watermark */
+ /* poll crap */
+ spinlock_t event_lock;
+ struct list_head event_list;
+
struct perf_event_mmap_page *user_page;
void *data_pages[0];
};
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-22 13:29 ` Peter Zijlstra
2011-11-22 14:13 ` Stephane Eranian
@ 2011-11-23 9:30 ` Stephane Eranian
2011-11-23 10:06 ` Peter Zijlstra
1 sibling, 1 reply; 11+ messages in thread
From: Stephane Eranian @ 2011-11-23 9:30 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo
On Tue, Nov 22, 2011 at 2:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, 2011-11-22 at 14:15 +0100, Stephane Eranian wrote:
>
>> > Ah, could it be a race of poll()/wakeup() vs perf_event_set_output() ?
>> >
>> Are you saying that by dropping event->waitq in favor of event->rb->waitq
>> we make this problem disappear due to rcu protections?
>
> Well, except..
>
>> Poll_wait() is a blocking call. It may wait on a stale waitq. But that problem
>> was probably already there. I am not clear as to what to do about that.
>> in perf_set_output() you would need to wakeup from poll_wait() and then
>> go back in with the new waitq.
>
> Right, the whole blocking thing is a problem, and the whole poll()
> interface always makes my head hurt.
>
> If there was a go-sleep and wake-up side to poll we could do
> ring_buffer_get()/put() and fix this problem, but I'm not finding a way
> to make that happen quite yet.
>
>> Similarly, I am not clear as to what happens when you close an event for
>> which you have a waiter in poll_wait(). I assume you wakeup from it.
>> But I don't see where that's implemented.
>
> Good point, yes we should do that.
>
So I did run some checks on the behavior of poll() in case a polled fd is
closed. It appears that you don't get out of it in all cases. It depends on
the nature of the file. I believe you get out of poll() ONLY in case of a
producer-consumer file descriptor pair, e.g., a pipe, named pipe, socket.
In other words you need to have a producer file descriptor. In case the
producer is closed, the reader waiting in poll() returns with a POLLHUP
flag. I did verify that for named pipes: close the writer, and the reader
comes out of poll.
In the case of perf_event, we are not in a producer-consumer model, so
it seems like the behavior we have now is correct. The caller of poll()
gets stuck if the file descriptor is closed.
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-23 9:30 ` Stephane Eranian
@ 2011-11-23 10:06 ` Peter Zijlstra
2011-11-23 10:10 ` Stephane Eranian
0 siblings, 1 reply; 11+ messages in thread
From: Peter Zijlstra @ 2011-11-23 10:06 UTC (permalink / raw)
To: Stephane Eranian; +Cc: linux-kernel, mingo
On Wed, 2011-11-23 at 10:30 +0100, Stephane Eranian wrote:
>
> In the case of perf_event, we are not in a producer-consumer model, so
> it seems like the behavior we have now is correct. The caller of poll()
> gets stuck if the file descriptor is closed.
But wouldn't our event->waitq still be referenced by that waiting task?
Even though we freed it in our fops->release() callback?
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-23 10:06 ` Peter Zijlstra
@ 2011-11-23 10:10 ` Stephane Eranian
2011-11-23 10:36 ` Peter Zijlstra
0 siblings, 1 reply; 11+ messages in thread
From: Stephane Eranian @ 2011-11-23 10:10 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo
On Wed, Nov 23, 2011 at 11:06 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, 2011-11-23 at 10:30 +0100, Stephane Eranian wrote:
>>
>> In the case of perf_event, we are not in a producer-consumer model, so
>> it seems like the behavior we have now is correct. The caller of poll()
>> gets stuck if the file descriptor is closed.
>
> But wouldn't our event->waitq still be referenced by that waiting task?
> Even though we freed it in our fops->release() callback?
>
I suspect you don't end up in fops->release() if you have an ongoing poll,
because you've probably incremented the file's refcount along the way.
* Re: [PATCH] perf_event: fix loss of notification with multi-event sampling
2011-11-23 10:10 ` Stephane Eranian
@ 2011-11-23 10:36 ` Peter Zijlstra
0 siblings, 0 replies; 11+ messages in thread
From: Peter Zijlstra @ 2011-11-23 10:36 UTC (permalink / raw)
To: Stephane Eranian; +Cc: linux-kernel, mingo
On Wed, 2011-11-23 at 11:10 +0100, Stephane Eranian wrote:
> On Wed, Nov 23, 2011 at 11:06 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Wed, 2011-11-23 at 10:30 +0100, Stephane Eranian wrote:
> >>
> >> In the case of perf_event, we are not in a producer-consumer model, so
> >> it seems like the behavior we have now is correct. The caller of poll()
> >> gets stuck if the file descriptor is closed.
> >
> > But wouldn't our event->waitq still be referenced by that waiting task?
> > Even though we freed it in our fops->release() callback?
> >
> I suspect you don't end up in fops->release() if you have an ongoing poll
> because you've probably incremented the file's refcount along the way.
You're quite right, poll does that internally. Ok, so we don't need to
worry about this bit then.