linux-input.vger.kernel.org archive mirror
* [RFC PATCH] Input: evdev - drop redundant list-locking
@ 2014-07-20 18:48 David Herrmann
  2014-07-20 18:54 ` Dmitry Torokhov
  2014-07-20 20:03 ` [PATCH v2] " David Herrmann
  0 siblings, 2 replies; 5+ messages in thread
From: David Herrmann @ 2014-07-20 18:48 UTC (permalink / raw)
  To: linux-input; +Cc: Dmitry Torokhov, Henrik Rydberg, David Herrmann

evdev->client_list is rcu-protected. There is no need to have a
separate spinlock just for the list. Either one is good enough, so let's
drop the spinlock.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
Hi

I stumbled across this one when doing some evdev reviews. Maybe I'm missing
something obvious and I should stop coding on Sundays. But the RCU-protection
should be enough here, right?

We _could_ do a synchronize_rcu() during evdev_attach_client() to guarantee that
new events are really delivered once it returns. But that seems rather pedantic
to me.

I'm also not sure why we use RCU here, anyway. I mean, there's no high
contention so a spinlock should be fine and would get rid of the very expensive
synchronize_rcu during close(). But then again, I don't really care enough to
write benchmarks for that..

Thanks
David

 drivers/input/evdev.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index 7a25a7a..1f38bd1 100644
--- a/drivers/input/evdev.c
+++ b/drivers/input/evdev.c
@@ -34,7 +34,6 @@ struct evdev {
 	wait_queue_head_t wait;
 	struct evdev_client __rcu *grab;
 	struct list_head client_list;
-	spinlock_t client_lock; /* protects client_list */
 	struct mutex mutex;
 	struct device dev;
 	struct cdev cdev;
@@ -433,17 +432,13 @@ static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
 static void evdev_attach_client(struct evdev *evdev,
 				struct evdev_client *client)
 {
-	spin_lock(&evdev->client_lock);
 	list_add_tail_rcu(&client->node, &evdev->client_list);
-	spin_unlock(&evdev->client_lock);
 }
 
 static void evdev_detach_client(struct evdev *evdev,
 				struct evdev_client *client)
 {
-	spin_lock(&evdev->client_lock);
 	list_del_rcu(&client->node);
-	spin_unlock(&evdev->client_lock);
 	synchronize_rcu();
 }
 
@@ -485,10 +480,10 @@ static void evdev_hangup(struct evdev *evdev)
 {
 	struct evdev_client *client;
 
-	spin_lock(&evdev->client_lock);
-	list_for_each_entry(client, &evdev->client_list, node)
+	rcu_read_lock();
+	list_for_each_entry_rcu(client, &evdev->client_list, node)
 		kill_fasync(&client->fasync, SIGIO, POLL_HUP);
-	spin_unlock(&evdev->client_lock);
+	rcu_read_unlock();
 
 	wake_up_interruptible(&evdev->wait);
 }
@@ -1438,7 +1433,6 @@ static int evdev_connect(struct input_handler *handler, struct input_dev *dev,
 	}
 
 	INIT_LIST_HEAD(&evdev->client_list);
-	spin_lock_init(&evdev->client_lock);
 	mutex_init(&evdev->mutex);
 	init_waitqueue_head(&evdev->wait);
 	evdev->exist = true;
-- 
2.0.2



* Re: [RFC PATCH] Input: evdev - drop redundant list-locking
  2014-07-20 18:48 [RFC PATCH] Input: evdev - drop redundant list-locking David Herrmann
@ 2014-07-20 18:54 ` Dmitry Torokhov
  2014-07-20 19:00   ` David Herrmann
  2014-07-20 20:03 ` [PATCH v2] " David Herrmann
  1 sibling, 1 reply; 5+ messages in thread
From: Dmitry Torokhov @ 2014-07-20 18:54 UTC (permalink / raw)
  To: David Herrmann; +Cc: linux-input, Henrik Rydberg

On Sun, Jul 20, 2014 at 08:48:12PM +0200, David Herrmann wrote:
> evdev->client_list is rcu-protected. There is no need to have a
> separate spinlock just for the list. Either one is good enough, so let's
> drop the spinlock.
> 
> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
> ---
> Hi
> 
> I stumbled across this one when doing some evdev reviews. Maybe I'm missing
> something obvious and I should stop coding on Sundays. But the RCU-protection
> should be enough here, right?

RCU protection is for traversing the list only; writes (such as adding and
removing elements from client_list) still have to be mutually exclusive.

Thanks.

-- 
Dmitry


* Re: [RFC PATCH] Input: evdev - drop redundant list-locking
  2014-07-20 18:54 ` Dmitry Torokhov
@ 2014-07-20 19:00   ` David Herrmann
  2014-07-20 19:53     ` Dmitry Torokhov
  0 siblings, 1 reply; 5+ messages in thread
From: David Herrmann @ 2014-07-20 19:00 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: open list:HID CORE LAYER, Henrik Rydberg

Hi

On Sun, Jul 20, 2014 at 8:54 PM, Dmitry Torokhov
<dmitry.torokhov@gmail.com> wrote:
> On Sun, Jul 20, 2014 at 08:48:12PM +0200, David Herrmann wrote:
>> evdev->client_list is rcu-protected. There is no need to have a
>> separate spinlock just for the list. Either one is good enough, so let's
>> drop the spinlock.
>>
>> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
>> ---
>> Hi
>>
>> I stumbled across this one when doing some evdev reviews. Maybe I'm missing
>> something obvious and I should stop coding on Sundays. But the RCU-protection
>> should be enough here, right?
>
> RCU protection is for traversing the list only; writes (such as adding and
> removing elements from client_list) still have to be mutually exclusive.

Oh, you mean to protect against concurrent writes? Right, but we could
just use evdev->mutex for that. I mean, all paths that call
attach_client() or detach_client() already lock evdev->mutex at some
point. That would allow us to get rid of the spinlock.

Thanks
David


* Re: [RFC PATCH] Input: evdev - drop redundant list-locking
  2014-07-20 19:00   ` David Herrmann
@ 2014-07-20 19:53     ` Dmitry Torokhov
  0 siblings, 0 replies; 5+ messages in thread
From: Dmitry Torokhov @ 2014-07-20 19:53 UTC (permalink / raw)
  To: David Herrmann; +Cc: open list:HID CORE LAYER, Henrik Rydberg

On Sun, Jul 20, 2014 at 09:00:02PM +0200, David Herrmann wrote:
> Hi
> 
> On Sun, Jul 20, 2014 at 8:54 PM, Dmitry Torokhov
> <dmitry.torokhov@gmail.com> wrote:
> > On Sun, Jul 20, 2014 at 08:48:12PM +0200, David Herrmann wrote:
> >> evdev->client_list is rcu-protected. There is no need to have a
> >> separate spinlock just for the list. Either one is good enough, so let's
> >> drop the spinlock.
> >>
> >> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
> >> ---
> >> Hi
> >>
> >> I stumbled across this one when doing some evdev reviews. Maybe I'm missing
> >> something obvious and I should stop coding on Sundays. But the RCU-protection
> >> should be enough here, right?
> >
> > RCU protection is for traversing the list only; writes (such as adding and
> > removing elements from client_list) still have to be mutually exclusive.
> 
> Oh, you mean to protect against concurrent writes? Right, but we could
> just use evdev->mutex for that. I mean all paths that call
> attach_client() or detach_client() already lock evdev->mutex at some
> point. It would allow us to get rid of the lock.

Right, we probably could do it by pulling the taking/releasing of
evdev->mutex into evdev_open and evdev_release.

Thanks.

-- 
Dmitry


* [PATCH v2] Input: evdev - drop redundant list-locking
  2014-07-20 18:48 [RFC PATCH] Input: evdev - drop redundant list-locking David Herrmann
  2014-07-20 18:54 ` Dmitry Torokhov
@ 2014-07-20 20:03 ` David Herrmann
  1 sibling, 0 replies; 5+ messages in thread
From: David Herrmann @ 2014-07-20 20:03 UTC (permalink / raw)
  To: linux-input; +Cc: Dmitry Torokhov, Henrik Rydberg, David Herrmann

evdev->client_list is rcu-protected. We need the client_lock only to
protect against concurrent writes. However, all paths that modify
client_list already lock evdev->mutex. Therefore, drop client_lock and use
evdev->mutex as the list protection.

This also drops several helper functions that are called only once. Most
of them are fairly trivial, so there's little reason to keep them
separate. This is needed to get better control over evdev->mutex locking.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
v2:
  - use evdev->mutex to protect against concurrent writes
  - inline most of the helper functions

 drivers/input/evdev.c | 126 ++++++++++++++++----------------------------------
 1 file changed, 40 insertions(+), 86 deletions(-)

diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index 7a25a7a..406acde 100644
--- a/drivers/input/evdev.c
+++ b/drivers/input/evdev.c
@@ -34,7 +34,6 @@ struct evdev {
 	wait_queue_head_t wait;
 	struct evdev_client __rcu *grab;
 	struct list_head client_list;
-	spinlock_t client_lock; /* protects client_list */
 	struct mutex mutex;
 	struct device dev;
 	struct cdev cdev;
@@ -430,69 +429,6 @@ static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
 	return 0;
 }
 
-static void evdev_attach_client(struct evdev *evdev,
-				struct evdev_client *client)
-{
-	spin_lock(&evdev->client_lock);
-	list_add_tail_rcu(&client->node, &evdev->client_list);
-	spin_unlock(&evdev->client_lock);
-}
-
-static void evdev_detach_client(struct evdev *evdev,
-				struct evdev_client *client)
-{
-	spin_lock(&evdev->client_lock);
-	list_del_rcu(&client->node);
-	spin_unlock(&evdev->client_lock);
-	synchronize_rcu();
-}
-
-static int evdev_open_device(struct evdev *evdev)
-{
-	int retval;
-
-	retval = mutex_lock_interruptible(&evdev->mutex);
-	if (retval)
-		return retval;
-
-	if (!evdev->exist)
-		retval = -ENODEV;
-	else if (!evdev->open++) {
-		retval = input_open_device(&evdev->handle);
-		if (retval)
-			evdev->open--;
-	}
-
-	mutex_unlock(&evdev->mutex);
-	return retval;
-}
-
-static void evdev_close_device(struct evdev *evdev)
-{
-	mutex_lock(&evdev->mutex);
-
-	if (evdev->exist && !--evdev->open)
-		input_close_device(&evdev->handle);
-
-	mutex_unlock(&evdev->mutex);
-}
-
-/*
- * Wake up users waiting for IO so they can disconnect from
- * dead device.
- */
-static void evdev_hangup(struct evdev *evdev)
-{
-	struct evdev_client *client;
-
-	spin_lock(&evdev->client_lock);
-	list_for_each_entry(client, &evdev->client_list, node)
-		kill_fasync(&client->fasync, SIGIO, POLL_HUP);
-	spin_unlock(&evdev->client_lock);
-
-	wake_up_interruptible(&evdev->wait);
-}
-
 static int evdev_release(struct inode *inode, struct file *file)
 {
 	struct evdev_client *client = file->private_data;
@@ -501,9 +437,12 @@ static int evdev_release(struct inode *inode, struct file *file)
 
 	mutex_lock(&evdev->mutex);
 	evdev_ungrab(evdev, client);
+	list_del_rcu(&client->node);
+	if (evdev->exist && !--evdev->open)
+		input_close_device(&evdev->handle);
 	mutex_unlock(&evdev->mutex);
 
-	evdev_detach_client(evdev, client);
+	synchronize_rcu();
 
 	for (i = 0; i < EV_CNT; ++i)
 		kfree(client->evmasks[i]);
@@ -513,8 +452,6 @@ static int evdev_release(struct inode *inode, struct file *file)
 	else
 		kfree(client);
 
-	evdev_close_device(evdev);
-
 	return 0;
 }
 
@@ -545,19 +482,38 @@ static int evdev_open(struct inode *inode, struct file *file)
 	client->bufsize = bufsize;
 	spin_lock_init(&client->buffer_lock);
 	client->evdev = evdev;
-	evdev_attach_client(evdev, client);
 
-	error = evdev_open_device(evdev);
+	error = mutex_lock_interruptible(&evdev->mutex);
 	if (error)
-		goto err_free_client;
+		goto err_free;
+
+	list_add_tail_rcu(&client->node, &evdev->client_list);
+
+	if (!evdev->exist) {
+		error = -ENODEV;
+		goto err_detach;
+	}
+
+	if (!evdev->open++) {
+		error = input_open_device(&evdev->handle);
+		if (error) {
+			evdev->open--;
+			goto err_detach;
+		}
+	}
+
+	mutex_unlock(&evdev->mutex);
 
 	file->private_data = client;
 	nonseekable_open(inode, file);
 
 	return 0;
 
- err_free_client:
-	evdev_detach_client(evdev, client);
+err_detach:
+	list_del_rcu(&client->node);
+	mutex_unlock(&evdev->mutex);
+	synchronize_rcu();
+err_free:
 	kfree(client);
 	return error;
 }
@@ -1384,24 +1340,23 @@ static const struct file_operations evdev_fops = {
 	.llseek		= no_llseek,
 };
 
-/*
- * Mark device non-existent. This disables writes, ioctls and
- * prevents new users from opening the device. Already posted
- * blocking reads will stay, however new ones will fail.
- */
-static void evdev_mark_dead(struct evdev *evdev)
+static void evdev_cleanup(struct evdev *evdev)
 {
+	struct input_handle *handle = &evdev->handle;
+	struct evdev_client *client;
+
+	/*
+	 * Mark device non-existent to disable writes, ioctls and new users.
+	 * Then wake up running users that wait for I/O so they can disconnect
+	 * from the dead device.
+	 */
 	mutex_lock(&evdev->mutex);
 	evdev->exist = false;
+	list_for_each_entry(client, &evdev->client_list, node)
+		kill_fasync(&client->fasync, SIGIO, POLL_HUP);
 	mutex_unlock(&evdev->mutex);
-}
 
-static void evdev_cleanup(struct evdev *evdev)
-{
-	struct input_handle *handle = &evdev->handle;
-
-	evdev_mark_dead(evdev);
-	evdev_hangup(evdev);
+	wake_up_interruptible(&evdev->wait);
 
 	cdev_del(&evdev->cdev);
 
@@ -1438,7 +1393,6 @@ static int evdev_connect(struct input_handler *handler, struct input_dev *dev,
 	}
 
 	INIT_LIST_HEAD(&evdev->client_list);
-	spin_lock_init(&evdev->client_lock);
 	mutex_init(&evdev->mutex);
 	init_waitqueue_head(&evdev->wait);
 	evdev->exist = true;
-- 
2.0.2


