From: Kay Sievers <kay.sievers@vrfy.org>
To: linux-hotplug@vger.kernel.org
Subject: Re: [patch] udevd - cleanup and better timeout handling
Date: Wed, 28 Jan 2004 21:47:36 +0000
Message-ID: <20040128214736.GA16070@vrfy.org>
In-Reply-To: <20040125200314.GA8376@vrfy.org>
[-- Attachment #1: Type: text/plain, Size: 15285 bytes --]
On Tue, Jan 27, 2004 at 11:13:09AM -0800, Greg KH wrote:
> On Tue, Jan 27, 2004 at 08:08:09PM +0100, Kay Sievers wrote:
> > On Tue, Jan 27, 2004 at 07:56:04AM +0100, Kay Sievers wrote:
> > > On Mon, Jan 26, 2004 at 11:28:19AM -0800, Greg KH wrote:
> > > > On Mon, Jan 26, 2004 at 08:11:10PM +0100, Kay Sievers wrote:
> > > > > On Mon, Jan 26, 2004 at 10:22:34AM -0800, Greg KH wrote:
> > > > > > On Sun, Jan 25, 2004 at 09:03:14PM +0100, Kay Sievers wrote:
> > > > > > > 1. We are much too slow.
> > > > > > > We want to exec the real udev in the background, but a 'remove'
> > > > > > > is much, much faster than an 'add', so we have a problem.
> > > > > > > Question: is it necessary to order events for different devpaths?
> > > > > > > If not, we may waitpid() for the exec only if we have another udev
> > > > > > > working on the same devpath.
> > > > > >
> > > > > > But how will we keep track of that? It's probably easier just to wait
> > > > > > for each one to finish, right?
> > > > >
> > > > > We leave the message in the queue until we receive the SIGCHLD for its
> > > > > pid. So we can search the queue to see whether we are already working on
> > > > > this devpath, and delay the new exec until the former exec comes back.
> > > >
> > > > Ok, if that isn't too much trouble.
> > > >
> > > > > Is it feasible to run in parallel for different paths, or can you think
> > > > > of any problem?
> > > >
> > > > I can't think of any problem with that.
> > >
> > > Here is the next round. We have three queues now. All incoming messages
> > > are queued in msg_list, and if nothing is missing we move them to the
> > > running_list and exec in the background.
> > > When the exec comes back, it removes the message from the running_list
> > > and frees it.
> > >
> > > Before we exec, we check the running_list to see whether a udev is
> > > already running on the same device path. If so, we move the message to
> > > the delay_list. When the former exec comes back, we move the message to
> > > the running_list and exec it.
> >
> > Oh, sorry, forget about it for now.
> > I will come up with something better tested.
>
> Oops, I just applied this version :)
>
> I'll try testing this later on today.
Oh, I couldn't resist trying threads.
It's a multithreaded udevd that communicates through a localhost socket.
The message includes a magic string with the udev version, so we don't
accept messages from older udevsend binaries.
No need for a lock file, because two processes can't bind a socket to the
same address. udevsend tries to connect, and if that fails it starts the
daemon.
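Here is the "bind as lock" trick in isolation, as a minimal standalone
sketch (an illustration, not the patch code; the port number is UDEVD_PORT
from udevd.h below):

/* Only one process can bind the loopback port; a second daemon
 * instance fails with EADDRINUSE and can simply exit. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	struct sockaddr_in saddr;
	int ssock = socket(AF_INET, SOCK_STREAM, 0);

	if (ssock == -1)
		return 1;
	memset(&saddr, 0x00, sizeof(saddr));
	saddr.sin_family = AF_INET;
	saddr.sin_port = htons(44444);		/* UDEVD_PORT */
	saddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	if (bind(ssock, (struct sockaddr *) &saddr, sizeof(saddr)) < 0) {
		if (errno == EADDRINUSE)
			printf("daemon already running\n");
		close(ssock);
		return 1;
	}
	printf("we are the one and only daemon\n");
	/* listen()/accept() would follow here */
	close(ssock);
	return 0;
}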
We create a thread for every incoming connection, hand over the socket,
sort the message into the global message queue and exit the thread.
Huh, that was easy with threads :)
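The accept loop is tiny; this is condensed from main() in the attached
patch (with a continue added on hard accept() errors; as posted, the patch
would hand a negative fd to the client thread in that case):

/* every accepted connection gets its own short-lived thread,
 * the socket fd is handed over as the thread argument */
while (1) {
	csock = accept(ssock, (struct sockaddr *) &caddr, &clen);
	if (csock < 0) {
		if (errno == EINTR)
			continue;
		dbg("client accept failed");
		continue;	/* not in the patch, see above */
	}
	pthread_create(&cli_tid, NULL, client_threads, (void *) csock);
}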
Whenever a message is added, we wake up the queue manager thread, which
handles timeouts and moves messages to the global exec list. That wakes up
the exec list manager, which checks whether a process is already running
for this device path. If so, the exec is delayed; otherwise we create a
thread that execs udev in the background. When udev returns, we free the
message and wake up the exec list manager to see if something is pending.
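All the wakeups use the same condition-variable scheme; a minimal sketch
(the helper name wakeup_exec_manager is made up here; the patch open-codes
these three lines at every signalling site):

#include <pthread.h>

static pthread_mutex_t exec_active_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t exec_active = PTHREAD_COND_INITIALIZER;

/* producer side: called whenever the exec list changes */
static void wakeup_exec_manager(void)
{
	pthread_mutex_lock(&exec_active_lock);
	pthread_cond_signal(&exec_active);
	pthread_mutex_unlock(&exec_active_lock);
}

/* consumer side: rescan the list after every wakeup */
static void *exec_queue_manager(void *parm)
{
	while (1) {
		/* scan exec_list: start a thread per event, or delay
		 * it if one is already running on the same devpath */
		pthread_mutex_lock(&exec_active_lock);
		pthread_cond_wait(&exec_active, &exec_active_lock);
		pthread_mutex_unlock(&exec_active_lock);
	}
	return NULL;
}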
It is just a quick shot, because I couldn't solve the problems with fork
and scheduling, and I wanted to see if I was just too stupid :)
If anybody has a better idea or more experience with I/O scheduling, we
may go another way. The remaining problem is that klibc doesn't support
threads.
For now, we don't exec anything; every exec is just a sleep 3. But you can
watch the queue management in syslog after running:

DEVPATH=/abc ACTION=add SEQNUM=0 ./udevsend /abc
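To exercise the devpath delay logic with several devices, something along
these lines works (just an illustration; the real sequence is in
test/udevd_test.sh, patched below):

# out-of-order events on two devpaths; seq 3 arrives after seq 4,
# so seq 4 waits in the msg queue until 3 shows up (or times out)
DEVPATH=/block/sda ACTION=add    SEQNUM=1 ./udevsend block
DEVPATH=/block/sdb ACTION=add    SEQNUM=2 ./udevsend block
DEVPATH=/block/sda ACTION=remove SEQNUM=4 ./udevsend block
DEVPATH=/block/sdb ACTION=remove SEQNUM=3 ./udevsend block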
thanks,
Kay
The result of the test script follows. I sent SEQNUM=0 before starting the
script, so we don't wait for the initial timeout. Look at the execs for
seq 9 -> 12 -> 10 to see the DEVPATH delay logic:
Jan 28 22:23:41 pim udev[24687]: main: connect failed, try starting daemon...
Jan 28 22:23:41 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 0
Jan 28 22:23:41 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:23:41 pim udev[24687]: main: daemon started
Jan 28 22:23:41 pim udev[24689]: msg_queue_insert: add message seq 0
Jan 28 22:23:41 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 0
Jan 28 22:23:41 pim udev[24689]: msg_queue_manager: moved seq 0 to exec, next expected is 1
Jan 28 22:23:41 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:23:41 pim udev[24689]: run_threads: ==> exec seq 0 working at '/'
Jan 28 22:23:41 pim udev[24689]: exec_queue_manager: moved seq 0 to running list
Jan 28 22:23:44 pim udev[24689]: run_threads: <== exec seq 0 came back
Jan 28 22:23:44 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:23:51 pim udev[24689]: msg_queue_insert: add message seq 3
Jan 28 22:23:51 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 1
Jan 28 22:23:51 pim udev[24689]: msg_queue_insert: add message seq 1
Jan 28 22:23:51 pim udev[24689]: msg_queue_insert: add message seq 2
Jan 28 22:23:51 pim udev[24689]: msg_queue_insert: add message seq 4
Jan 28 22:23:51 pim udev[24689]: msg_queue_insert: add message seq 5
Jan 28 22:23:51 pim udev[24689]: msg_queue_insert: add message seq 8
Jan 28 22:23:51 pim udev[24689]: msg_queue_insert: add message seq 6
Jan 28 22:23:51 pim udev[24689]: msg_queue_manager: moved seq 1 to exec, next expected is 2
Jan 28 22:23:51 pim udev[24689]: msg_queue_manager: moved seq 2 to exec, next expected is 3
Jan 28 22:23:51 pim udev[24689]: msg_queue_manager: moved seq 3 to exec, next expected is 4
Jan 28 22:23:51 pim udev[24689]: msg_queue_manager: moved seq 4 to exec, next expected is 5
Jan 28 22:23:51 pim udev[24689]: msg_queue_manager: moved seq 5 to exec, next expected is 6
Jan 28 22:23:51 pim udev[24689]: msg_queue_manager: moved seq 6 to exec, next expected is 7
Jan 28 22:23:51 pim udev[24689]: msg_dump_queue: sequence 8 in queue
Jan 28 22:23:51 pim udev[24689]: msg_queue_manager: next event expires in 10 seconds
Jan 28 22:23:51 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:23:51 pim udev[24689]: run_threads: ==> exec seq 1 working at '/block/sda'
Jan 28 22:23:51 pim udev[24689]: exec_queue_manager: moved seq 1 to running list
Jan 28 22:23:51 pim udev[24689]: run_threads: ==> exec seq 2 working at '/block/sdb'
Jan 28 22:23:51 pim udev[24689]: exec_queue_manager: moved seq 2 to running list
Jan 28 22:23:51 pim udev[24689]: run_threads: ==> exec seq 3 working at '/block/sdc'
Jan 28 22:23:51 pim udev[24689]: exec_queue_manager: moved seq 3 to running list
Jan 28 22:23:51 pim udev[24689]: exec_queue_manager: delay seq 4, cause seq 3 already working on '/block/sdc'
Jan 28 22:23:51 pim udev[24689]: exec_queue_manager: delay seq 5, cause seq 2 already working on '/block/sdb'
Jan 28 22:23:51 pim udev[24689]: exec_queue_manager: delay seq 6, cause seq 1 already working on '/block/sda'
Jan 28 22:23:53 pim udev[24689]: msg_queue_insert: add message seq 9
Jan 28 22:23:53 pim udev[24689]: msg_queue_insert: add message seq 11
Jan 28 22:23:53 pim udev[24689]: msg_queue_insert: add message seq 10
Jan 28 22:23:53 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 7
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 8 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 9 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 10 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 11 in queue
Jan 28 22:23:53 pim udev[24689]: msg_queue_manager: next event expires in 8 seconds
Jan 28 22:23:53 pim udev[24689]: msg_queue_insert: add message seq 13
Jan 28 22:23:53 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 7
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 8 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 9 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 10 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 11 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 13 in queue
Jan 28 22:23:53 pim udev[24689]: msg_queue_manager: next event expires in 8 seconds
Jan 28 22:23:53 pim udev[24689]: msg_queue_insert: add message seq 14
Jan 28 22:23:53 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 7
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 8 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 9 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 10 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 11 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 13 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 14 in queue
Jan 28 22:23:53 pim udev[24689]: msg_queue_manager: next event expires in 8 seconds
Jan 28 22:23:53 pim udev[24689]: msg_queue_insert: add message seq 15
Jan 28 22:23:53 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 7
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 8 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 9 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 10 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 11 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 13 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 14 in queue
Jan 28 22:23:53 pim udev[24689]: msg_dump_queue: sequence 15 in queue
Jan 28 22:23:53 pim udev[24689]: msg_queue_manager: next event expires in 8 seconds
Jan 28 22:23:54 pim udev[24689]: run_threads: <== exec seq 1 came back
Jan 28 22:23:54 pim udev[24689]: run_threads: <== exec seq 2 came back
Jan 28 22:23:54 pim udev[24689]: run_threads: <== exec seq 3 came back
Jan 28 22:23:54 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:23:54 pim udev[24689]: run_threads: ==> exec seq 4 working at '/block/sdc'
Jan 28 22:23:54 pim udev[24689]: exec_queue_manager: moved seq 4 to running list
Jan 28 22:23:54 pim udev[24689]: run_threads: ==> exec seq 5 working at '/block/sdb'
Jan 28 22:23:54 pim udev[24689]: exec_queue_manager: moved seq 5 to running list
Jan 28 22:23:54 pim udev[24689]: run_threads: ==> exec seq 6 working at '/block/sda'
Jan 28 22:23:54 pim udev[24689]: exec_queue_manager: moved seq 6 to running list
Jan 28 22:23:55 pim udev[24689]: msg_queue_insert: add message seq 12
Jan 28 22:23:55 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 7
Jan 28 22:23:55 pim udev[24689]: msg_dump_queue: sequence 8 in queue
Jan 28 22:23:55 pim udev[24689]: msg_dump_queue: sequence 9 in queue
Jan 28 22:23:55 pim udev[24689]: msg_dump_queue: sequence 10 in queue
Jan 28 22:23:55 pim udev[24689]: msg_dump_queue: sequence 11 in queue
Jan 28 22:23:55 pim udev[24689]: msg_dump_queue: sequence 12 in queue
Jan 28 22:23:55 pim udev[24689]: msg_dump_queue: sequence 13 in queue
Jan 28 22:23:55 pim udev[24689]: msg_dump_queue: sequence 14 in queue
Jan 28 22:23:55 pim udev[24689]: msg_dump_queue: sequence 15 in queue
Jan 28 22:23:55 pim udev[24689]: msg_queue_manager: next event expires in 6 seconds
Jan 28 22:23:57 pim udev[24689]: run_threads: <== exec seq 4 came back
Jan 28 22:23:57 pim udev[24689]: run_threads: <== exec seq 5 came back
Jan 28 22:23:57 pim udev[24689]: run_threads: <== exec seq 6 came back
Jan 28 22:23:57 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: msg queue manager, next expected is 7
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: moved seq 8 to exec, reset next expected to 9
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: moved seq 9 to exec, next expected is 10
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: moved seq 10 to exec, next expected is 11
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: moved seq 11 to exec, next expected is 12
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: moved seq 12 to exec, next expected is 13
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: moved seq 13 to exec, next expected is 14
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: moved seq 14 to exec, next expected is 15
Jan 28 22:24:01 pim udev[24689]: msg_queue_manager: moved seq 15 to exec, next expected is 16
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:24:01 pim udev[24689]: run_threads: ==> exec seq 8 working at '/block/sdb'
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: moved seq 8 to running list
Jan 28 22:24:01 pim udev[24689]: run_threads: ==> exec seq 9 working at '/block/sdc'
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: moved seq 9 to running list
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: delay seq 10, cause seq 9 already working on '/block/sdc'
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: delay seq 11, cause seq 8 already working on '/block/sdb'
Jan 28 22:24:01 pim udev[24689]: run_threads: ==> exec seq 12 working at '/block/sda'
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: moved seq 12 to running list
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: delay seq 13, cause seq 12 already working on '/block/sda'
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: delay seq 14, cause seq 8 already working on '/block/sdb'
Jan 28 22:24:01 pim udev[24689]: exec_queue_manager: delay seq 15, cause seq 9 already working on '/block/sdc'
Jan 28 22:24:04 pim udev[24689]: run_threads: <== exec seq 8 came back
Jan 28 22:24:04 pim udev[24689]: run_threads: <== exec seq 9 came back
Jan 28 22:24:04 pim udev[24689]: run_threads: <== exec seq 12 came back
Jan 28 22:24:04 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:24:04 pim udev[24689]: run_threads: ==> exec seq 10 working at '/block/sdc'
Jan 28 22:24:04 pim udev[24689]: exec_queue_manager: moved seq 10 to running list
Jan 28 22:24:04 pim udev[24689]: run_threads: ==> exec seq 11 working at '/block/sdb'
Jan 28 22:24:04 pim udev[24689]: exec_queue_manager: moved seq 11 to running list
Jan 28 22:24:04 pim udev[24689]: run_threads: ==> exec seq 13 working at '/block/sda'
Jan 28 22:24:04 pim udev[24689]: exec_queue_manager: moved seq 13 to running list
Jan 28 22:24:04 pim udev[24689]: exec_queue_manager: delay seq 14, cause seq 11 already working on '/block/sdb'
Jan 28 22:24:04 pim udev[24689]: exec_queue_manager: delay seq 15, cause seq 10 already working on '/block/sdc'
Jan 28 22:24:07 pim udev[24689]: run_threads: <== exec seq 10 came back
Jan 28 22:24:07 pim udev[24689]: run_threads: <== exec seq 11 came back
Jan 28 22:24:07 pim udev[24689]: run_threads: <== exec seq 13 came back
Jan 28 22:24:07 pim udev[24689]: exec_queue_manager: exec list manager
Jan 28 22:24:07 pim udev[24689]: run_threads: ==> exec seq 14 working at '/block/sdb'
Jan 28 22:24:07 pim udev[24689]: exec_queue_manager: moved seq 14 to running list
Jan 28 22:24:07 pim udev[24689]: run_threads: ==> exec seq 15 working at '/block/sdc'
Jan 28 22:24:07 pim udev[24689]: exec_queue_manager: moved seq 15 to running list
Jan 28 22:24:10 pim udev[24689]: run_threads: <== exec seq 14 came back
Jan 28 22:24:10 pim udev[24689]: run_threads: <== exec seq 15 came back
Jan 28 22:24:10 pim udev[24689]: exec_queue_manager: exec list manager
[-- Attachment #2: 01-udevd-threads.patch --]
[-- Type: text/plain, Size: 20725 bytes --]
===== Makefile 1.93 vs edited =====
--- 1.93/Makefile Tue Jan 27 08:29:28 2004
+++ edited/Makefile Wed Jan 28 22:21:26 2004
@@ -255,7 +255,7 @@
$(STRIPCMD) $@
$(DAEMON): udevd.h udevd.o udevd.o logging.o
- $(LD) $(LDFLAGS) -o $@ $(CRT0) udevd.o logging.o $(LIB_OBJS) $(ARCH_LIB_OBJS)
+ $(LD) $(LDFLAGS) -lpthread -o $@ $(CRT0) udevd.o logging.o $(LIB_OBJS) $(ARCH_LIB_OBJS)
$(STRIPCMD) $@
$(SENDER): udevd.h udevsend.o udevd.o logging.o
===== udevd.c 1.7 vs edited =====
--- 1.7/udevd.c Wed Jan 28 19:52:44 2004
+++ edited/udevd.c Wed Jan 28 22:22:04 2004
@@ -3,7 +3,6 @@
*
* Userspace devfs
*
- * Copyright (C) 2004 Ling, Xiaofeng <xiaofeng.ling@intel.com>
* Copyright (C) 2004 Kay Sievers <kay.sievers@vrfy.org>
*
*
@@ -24,7 +23,6 @@
#include <stddef.h>
#include <sys/types.h>
-#include <sys/ipc.h>
#include <sys/wait.h>
#include <sys/msg.h>
#include <signal.h>
@@ -35,6 +33,11 @@
#include <string.h>
#include <time.h>
#include <fcntl.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <netdb.h>
+#include <pthread.h>
#include "list.h"
#include "udev.h"
@@ -43,302 +46,311 @@
#include "logging.h"
-#define BUFFER_SIZE 1024
+static pthread_mutex_t msg_lock;
+static pthread_mutex_t msg_active_lock;
+static pthread_cond_t msg_active;
+
+static pthread_mutex_t exec_lock;
+static pthread_mutex_t exec_active_lock;
+static pthread_cond_t exec_active;
-static int running_remove_queue(pid_t pid);
-static int msg_exec(struct hotplug_msg *msg);
+static pthread_mutex_t running_lock;
-static int expect_seqnum = 0;
-static int lock_file = -1;
-static char *lock_filename = ".udevd_lock";
+static int expected_seqnum = 0;
LIST_HEAD(msg_list);
+LIST_HEAD(exec_list);
LIST_HEAD(running_list);
-LIST_HEAD(delayed_list);
-static void sig_handler(int signum)
+
+static void msg_dump_queue(void)
{
- pid_t pid;
+ struct hotplug_msg *msg;
- dbg("caught signal %d", signum);
- switch (signum) {
- case SIGALRM:
- dbg("event timeout reached");
- break;
- case SIGCHLD:
- /* catch signals from exiting childs */
- while ( (pid = waitpid(-1, NULL, WNOHANG)) > 0) {
- dbg("exec finished, pid %d", pid);
- running_remove_queue(pid);
- }
- break;
- case SIGINT:
- case SIGTERM:
- if (lock_file >= 0) {
- close(lock_file);
- unlink(lock_filename);
- }
- exit(20 + signum);
- break;
- default:
- dbg("unhandled signal");
- }
+ list_for_each_entry(msg, &msg_list, list)
+ dbg("sequence %d in queue", msg->seqnum);
}
-static void set_timeout(int seconds)
+static void msg_dump(struct hotplug_msg *msg)
{
- alarm(seconds);
- dbg("set timeout in %d seconds", seconds);
+ dbg("sequence %d, '%s', '%s', '%s'",
+ msg->seqnum, msg->action, msg->devpath, msg->subsystem);
}
-static int running_moveto_queue(struct hotplug_msg *msg)
+/* allocates a new message */
+static struct hotplug_msg *msg_create(void)
{
- dbg("move sequence %d [%d] to running queue '%s'",
- msg->seqnum, msg->pid, msg->devpath);
- list_move_tail(&msg->list, &running_list);
- return 0;
+ struct hotplug_msg *new_msg;
+
+ new_msg = malloc(sizeof(struct hotplug_msg));
+ if (new_msg == NULL) {
+ dbg("error malloc");
+ return NULL;
+ }
+ memset(new_msg, 0x00, sizeof(struct hotplug_msg));
+ return new_msg;
}
-static int running_remove_queue(pid_t pid)
+/* orders the message in the queue by sequence number */
+static void msg_queue_insert(struct hotplug_msg *msg)
{
- struct hotplug_msg *child;
- struct hotplug_msg *tmp_child;
+ struct hotplug_msg *loop_msg;
- list_for_each_entry_safe(child, tmp_child, &running_list, list)
- if (child->pid == pid) {
- list_del_init(&child->list);
- free(child);
- return 0;
- }
- return -EINVAL;
-}
+ dbg("add message seq %d", msg->seqnum);
-static pid_t running_getpid_by_devpath(struct hotplug_msg *msg)
-{
- struct hotplug_msg *child;
- struct hotplug_msg *tmp_child;
+ /* sort message by sequence number into list*/
+ list_for_each_entry(loop_msg, &msg_list, list)
+ if (loop_msg->seqnum > msg->seqnum)
+ break;
+ list_add_tail(&msg->list, &loop_msg->list);
- list_for_each_entry_safe(child, tmp_child, &running_list, list)
- if (strncmp(child->devpath, msg->devpath, sizeof(child->devpath)) == 0)
- return child->pid;
- return 0;
-}
+ /* store timestamp of queuing */
+ msg->queue_time = time(NULL);
-static void delayed_dump_queue(void)
-{
- struct hotplug_msg *child;
+ /* signal queue activity to manager */
+ pthread_mutex_lock(&msg_active_lock);
+ pthread_cond_signal(&msg_active);
+ pthread_mutex_unlock(&msg_active_lock);
- list_for_each_entry(child, &delayed_list, list)
- dbg("event for '%s' in queue", child->devpath);
+ return ;
}
-static int delayed_moveto_queue(struct hotplug_msg *msg)
+/* forks event and removes event from run queue when finished */
+static void *run_threads(void * parm)
{
- dbg("move event to delayed queue '%s'", msg->devpath);
- list_move_tail(&msg->list, &delayed_list);
- return 0;
-}
+ struct hotplug_msg *run_msg;
-static void delayed_check_queue(void)
-{
- struct hotplug_msg *delayed_child;
- struct hotplug_msg *running_child;
- struct hotplug_msg *tmp_child;
-
- /* see if we have delayed exec's that can run now */
- list_for_each_entry_safe(delayed_child, tmp_child, &delayed_list, list)
- list_for_each_entry_safe(running_child, tmp_child, &running_list, list)
- if (strncmp(delayed_child->devpath, running_child->devpath,
- sizeof(running_child->devpath)) == 0) {
- dbg("delayed exec for '%s' can run now", delayed_child->devpath);
- msg_exec(delayed_child);
- }
+ run_msg = parm;
+
+ dbg("==> exec seq %d working at '%s'", run_msg->seqnum, run_msg->devpath);
+ sleep(3);
+ dbg("<== exec seq %d came back", run_msg->seqnum);
+
+ /* remove event from run list */
+ pthread_mutex_lock(&running_lock);
+ list_del_init(&run_msg->list);
+ pthread_mutex_unlock(&running_lock);
+
+ free(run_msg);
+
+ /* signal queue activity to exec manager */
+ pthread_mutex_lock(&exec_active_lock);
+ pthread_cond_signal(&exec_active);
+ pthread_mutex_unlock(&exec_active_lock);
+
+ pthread_exit(0);
}
-static void msg_dump(struct hotplug_msg *msg)
+/* returns already running task with devpath */
+static struct hotplug_msg *running_with_devpath(struct hotplug_msg *msg)
{
- dbg("sequence %d, '%s', '%s', '%s'",
- msg->seqnum, msg->action, msg->devpath, msg->subsystem);
+ struct hotplug_msg *loop_msg;
+ struct hotplug_msg *tmp_msg;
+
+ list_for_each_entry_safe(loop_msg, tmp_msg, &running_list, list)
+ if (strncmp(loop_msg->devpath, msg->devpath, sizeof(loop_msg->devpath)) == 0)
+ return loop_msg;
+ return NULL;
}
-static int msg_exec(struct hotplug_msg *msg)
+/* queue management executes the events and delays events for the same devpath */
+static void *exec_queue_manager(void * parm)
{
- pid_t pid;
+ struct hotplug_msg *loop_msg;
+ struct hotplug_msg *tmp_msg;
+ struct hotplug_msg *msg;
+ pthread_t run_tid;
- msg_dump(msg);
+ while (1) {
+ dbg("exec list manager");
+ pthread_mutex_lock(&exec_lock);
- setenv("ACTION", msg->action, 1);
- setenv("DEVPATH", msg->devpath, 1);
+ list_for_each_entry_safe(loop_msg, tmp_msg, &exec_list, list) {
+ msg = running_with_devpath(loop_msg);
+ if (msg == NULL) {
+ /* move event to run list */
+ pthread_mutex_lock(&running_lock);
+ list_move_tail(&loop_msg->list, &running_list);
+ pthread_mutex_unlock(&running_lock);
+
+ pthread_create(&run_tid, NULL, run_threads, (void *) loop_msg);
+
+ dbg("moved seq %d to running list", loop_msg->seqnum);
+ } else {
+ dbg("delay seq %d, cause seq %d already working on '%s'",
+ loop_msg->seqnum, msg->seqnum, msg->devpath);
+ }
+ }
- /* delay exec, if we already have a udev working on the same devpath */
- pid = running_getpid_by_devpath(msg);
- if (pid != 0) {
- dbg("delay exec of sequence %d, [%d] already working on '%s'",
- msg->seqnum, pid, msg->devpath);
- delayed_moveto_queue(msg);
- }
+ pthread_mutex_unlock(&exec_lock);
- pid = fork();
- switch (pid) {
- case 0:
- /* child */
- execl(UDEV_BIN, "udev", msg->subsystem, NULL);
- dbg("exec of child failed");
- exit(1);
- break;
- case -1:
- dbg("fork of child failed");
- return -1;
- default:
- /* exec in background, get the SIGCHLD with the sig handler */
- msg->pid = pid;
- running_moveto_queue(msg);
- break;
+ /* wait for activation, new events or childs coming back */
+ pthread_mutex_lock(&exec_active_lock);
+ pthread_cond_wait(&exec_active, &exec_active_lock);
+ pthread_mutex_unlock(&exec_active_lock);
}
- return 0;
}
-static void msg_dump_queue(void)
+/* move message from incoming to exec queue */
+static void msg_move_exec(struct list_head *head)
{
- struct hotplug_msg *msg;
-
- list_for_each_entry(msg, &msg_list, list)
- dbg("sequence %d in queue", msg->seqnum);
+ list_move_tail(head, &exec_list);
+ /* signal queue activity to manager */
+ pthread_mutex_lock(&exec_active_lock);
+ pthread_cond_signal(&exec_active);
+ pthread_mutex_unlock(&exec_active_lock);
}
-static void msg_check_queue(void)
+/* queue management thread handles the timeouts and dispatches the events */
+static void *msg_queue_manager(void * parm)
{
- struct hotplug_msg *msg;
+ struct hotplug_msg *loop_msg;
struct hotplug_msg *tmp_msg;
- time_t msg_age;
+ time_t msg_age = 0;
+ struct timespec tv;
+
+ while (1) {
+ dbg("msg queue manager, next expected is %d", expected_seqnum);
+ pthread_mutex_lock(&msg_lock);
+ pthread_mutex_lock(&exec_lock);
recheck:
- /* dispatch events until one is missing */
- list_for_each_entry_safe(msg, tmp_msg, &msg_list, list) {
- if (msg->seqnum != expect_seqnum)
- break;
- msg_exec(msg);
- expect_seqnum++;
- }
+ list_for_each_entry_safe(loop_msg, tmp_msg, &msg_list, list) {
+ /* move event with expected sequence to the exec list */
+ if (loop_msg->seqnum == expected_seqnum) {
+ msg_move_exec(&loop_msg->list);
+ expected_seqnum++;
+ dbg("moved seq %d to exec, next expected is %d",
+ loop_msg->seqnum, expected_seqnum);
+ continue;
+ }
- /* recalculate next timeout */
- if (list_empty(&msg_list) == 0) {
- msg_age = time(NULL) - msg->queue_time;
- if (msg_age > EVENT_TIMEOUT_SEC-1) {
- info("event %d, age %li seconds, skip event %d-%d",
- msg->seqnum, msg_age, expect_seqnum, msg->seqnum-1);
- expect_seqnum = msg->seqnum;
- goto recheck;
+ /* move event with expired timeout to the exec list */
+ msg_age = time(NULL) - loop_msg->queue_time;
+ if (msg_age > EVENT_TIMEOUT_SEC-1) {
+ msg_move_exec(&loop_msg->list);
+ expected_seqnum = loop_msg->seqnum+1;
+ dbg("moved seq %d to exec, reset next expected to %d",
+ loop_msg->seqnum, expected_seqnum);
+ goto recheck;
+ } else {
+ break;
+ }
}
- /* the first sequence gets its own timeout */
- if (expect_seqnum == 0) {
- msg_age = EVENT_TIMEOUT_SEC - FIRST_EVENT_TIMEOUT_SEC;
- expect_seqnum = 1;
+ msg_dump_queue();
+ pthread_mutex_unlock(&exec_lock);
+ pthread_mutex_unlock(&msg_lock);
+
+ /* wait until queue gets active or next message timeout expires */
+ pthread_mutex_lock(&msg_active_lock);
+
+ if (list_empty(&msg_list) == 0) {
+ tv.tv_sec = time(NULL) + EVENT_TIMEOUT_SEC - msg_age;
+ tv.tv_nsec = 0;
+ dbg("next event expires in %li seconds",
+ EVENT_TIMEOUT_SEC - msg_age);
+ pthread_cond_timedwait(&msg_active, &msg_active_lock, &tv);
+ } else {
+ pthread_cond_wait(&msg_active, &msg_active_lock);
}
- set_timeout(EVENT_TIMEOUT_SEC - msg_age);
- return;
+ pthread_mutex_unlock(&msg_active_lock);
}
}
-static int msg_add_queue(struct hotplug_msg *msg)
+/* every connect creates a thread which gets the msg, queues it and exits */
+static void *client_threads(void * parm)
{
- struct hotplug_msg *new_msg;
- struct hotplug_msg *tmp_msg;
+ int sock;
+ struct hotplug_msg *msg;
+ int retval;
- new_msg = malloc(sizeof(*new_msg));
- if (new_msg == NULL) {
- dbg("error malloc");
- return -ENOMEM;
+ sock = (int) parm;
+ msg = msg_create();
+ if (msg == NULL) {
+ dbg("unable to store message");
+ goto exit;
}
- memcpy(new_msg, msg, sizeof(*new_msg));
- /* store timestamp of queuing */
- new_msg->queue_time = time(NULL);
+ retval = recv(sock, msg, sizeof(struct hotplug_msg), 0);
+ if (retval < 0) {
+ dbg("unable to receive message");
+ goto exit;
+ }
- /* sort message by sequence number into list*/
- list_for_each_entry(tmp_msg, &msg_list, list)
- if (tmp_msg->seqnum > new_msg->seqnum)
- break;
- list_add_tail(&new_msg->list, &tmp_msg->list);
+ if (strncmp(msg->magic, UDEV_MAGIC, sizeof(UDEV_MAGIC)) != 0 ) {
+ dbg("message magic '%s' doesn't match, ignore it", msg->magic);
+ goto exit;
+ }
- return 0;
+ pthread_mutex_lock(&msg_lock);
+ msg_queue_insert(msg);
+ pthread_mutex_unlock(&msg_lock);
+
+exit:
+ close(sock);
+ pthread_exit(0);
}
-static void work(void)
+int main(int argc, char *argv[])
{
- struct hotplug_msg *msg;
- int msgid;
- key_t key;
- char buf[BUFFER_SIZE];
- int ret;
-
- key = ftok(UDEVD_BIN, IPC_KEY_ID);
- msg = (struct hotplug_msg *) buf;
- msgid = msgget(key, IPC_CREAT);
- if (msgid == -1) {
- dbg("open message queue error");
+ pthread_t cli_tid;
+ pthread_t mgr_msg_tid;
+ pthread_t mgr_exec_tid;
+ int ssock;
+ int csock;
+ struct sockaddr_in saddr;
+ struct sockaddr_in caddr;
+ socklen_t clen;
+ int retval;
+
+ pthread_mutex_init(&msg_lock, NULL);
+ pthread_mutex_init(&msg_active_lock, NULL);
+ pthread_mutex_init(&exec_lock, NULL);
+ pthread_mutex_init(&exec_active_lock, NULL);
+ pthread_mutex_init(&running_lock, NULL);
+
+ ssock = socket(AF_INET, SOCK_STREAM, 0);
+ if (ssock == -1) {
+ dbg("error getting socket");
exit(1);
}
- while (1) {
- ret = msgrcv(msgid, (struct msgbuf *) buf, BUFFER_SIZE-4, HOTPLUGMSGTYPE, 0);
- if (ret != -1) {
- dbg("received sequence %d, expected sequence %d", msg->seqnum, expect_seqnum);
- if (msg->seqnum >= expect_seqnum) {
- msg_add_queue(msg);
- msg_dump_queue();
- msg_check_queue();
- continue;
- }
- dbg("too late for event with sequence %d, event skipped ", msg->seqnum);
- } else {
- if (errno == EINTR) {
- msg_check_queue();
- msg_dump_queue();
- delayed_check_queue();
- delayed_dump_queue();
- continue;
- }
- dbg("ipc message receive error '%s'", strerror(errno));
- }
- }
-}
-static int one_and_only(void)
-{
- char string[100];
-
- lock_file = open(lock_filename, O_RDWR | O_CREAT, 0x640);
-
- /* see if we can open */
- if (lock_file < 0)
- return -1;
-
- /* see if we can lock */
- if (lockf(lock_file, F_TLOCK, 0) < 0) {
- close(lock_file);
- return -1;
+ memset(&saddr, 0x00, sizeof(saddr));
+ saddr.sin_family = AF_INET;
+ saddr.sin_port = htons(UDEVD_PORT);
+ saddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+
+ retval = bind(ssock, &saddr, sizeof(saddr));
+ if (retval < 0) {
+ dbg("bind failed\n");
+ goto exit;
}
- snprintf(string, sizeof(string), "%d\n", getpid());
- write(lock_file, string, strlen(string));
+ retval = listen(ssock, SOMAXCONN);
+ if (retval < 0) {
+ dbg("listen failed\n");
+ goto exit;
+ }
- return 0;
-}
+ /* init queue management */
+ pthread_create(&mgr_msg_tid, NULL, msg_queue_manager, NULL);
+ pthread_create(&mgr_exec_tid, NULL, exec_queue_manager, NULL);
-int main(int argc, char *argv[])
-{
- /* only let one version of the daemon run at any one time */
- if (one_and_only() != 0)
- exit(0);
-
- /* set up signal handler */
- signal(SIGINT, sig_handler);
- signal(SIGTERM, sig_handler);
- signal(SIGALRM, sig_handler);
- signal(SIGCHLD, sig_handler);
+ clen = sizeof(caddr);
+ /* main loop */
+ while (1) {
+ csock = accept(ssock, &caddr, &clen);
+ if (csock < 0) {
+ if (errno == EINTR)
+ continue;
+ dbg("client accept failed\n");
+ }
- work();
+ pthread_create(&cli_tid, NULL, client_threads, (void *) csock);
+ }
+exit:
+ close(ssock);
exit(0);
}
===== udevd.h 1.4 vs edited =====
--- 1.4/udevd.h Tue Jan 27 08:29:28 2004
+++ edited/udevd.h Wed Jan 28 22:21:28 2004
@@ -23,16 +23,18 @@
#include "list.h"
+#define UDEV_MAGIC "udev_" UDEV_VERSION
#define FIRST_EVENT_TIMEOUT_SEC 1
-#define EVENT_TIMEOUT_SEC 5
-#define UDEVSEND_RETRY_COUNT 50 /* x 10 millisec */
+#define EVENT_TIMEOUT_SEC 10
+#define UDEVD_START_TIMEOUT 20 /* x 100 millisec */
+#define UDEVD_PORT 44444
#define IPC_KEY_ID 1
#define HOTPLUGMSGTYPE 44
struct hotplug_msg {
- long mtype;
+ char magic[20];
struct list_head list;
pid_t pid;
int seqnum;
===== udevsend.c 1.8 vs edited =====
--- 1.8/udevsend.c Tue Jan 27 08:29:28 2004
+++ edited/udevsend.c Wed Jan 28 22:21:28 2004
@@ -32,12 +32,29 @@
#include <unistd.h>
#include <time.h>
#include <wait.h>
+#include <netinet/in.h>
#include "udev.h"
#include "udev_version.h"
#include "udevd.h"
#include "logging.h"
+
+static void sig_handler(int signum)
+{
+ switch (signum) {
+ case SIGALRM:
+ dbg("timeout");
+ case SIGINT:
+ case SIGTERM:
+ exit(20 + signum);
+ break;
+ default:
+ dbg("unhandled signal");
+ }
+}
+
+
static inline char *get_action(void)
{
char *action;
@@ -46,6 +63,7 @@
return action;
}
+
static inline char *get_devpath(void)
{
char *devpath;
@@ -54,6 +72,7 @@
return devpath;
}
+
static inline char *get_seqnum(void)
{
char *seqnum;
@@ -62,11 +81,12 @@
return seqnum;
}
+
static int build_hotplugmsg(struct hotplug_msg *msg, char *action,
char *devpath, char *subsystem, int seqnum)
{
memset(msg, 0x00, sizeof(*msg));
- msg->mtype = HOTPLUGMSGTYPE;
+ strfieldcpy(msg->magic, UDEV_MAGIC);
msg->seqnum = seqnum;
strncpy(msg->action, action, 8);
strncpy(msg->devpath, devpath, 128);
@@ -74,6 +94,7 @@
return sizeof(struct hotplug_msg);
}
+
static int start_daemon(void)
{
pid_t pid;
@@ -111,9 +132,6 @@
int main(int argc, char* argv[])
{
- int msgid;
- key_t key;
- struct msqid_ds msg_queue;
struct hotplug_msg message;
char *action;
char *devpath;
@@ -124,6 +142,8 @@
int size;
int loop;
struct timespec tspec;
+ int sock;
+ struct sockaddr_in saddr;
subsystem = argv[1];
if (subsystem == NULL) {
@@ -150,41 +170,62 @@
}
seq = atoi(seqnum);
- /* create ipc message queue or get id of our existing one */
- key = ftok(UDEVD_BIN, IPC_KEY_ID);
- dbg("using ipc queue 0x%0x", key);
- size = build_hotplugmsg(&message, action, devpath, subsystem, seq);
- msgid = msgget(key, IPC_CREAT);
- if (msgid == -1) {
- dbg("error open ipc queue");
+ sock = socket(AF_INET, SOCK_STREAM, 0);
+ if (sock == -1) {
+ dbg("error getting socket");
goto exit;
}
- /* send ipc message to the daemon */
- retval = msgsnd(msgid, &message, size, 0);
- if (retval == -1) {
- dbg("error sending ipc message");
- goto exit;
+ memset(&saddr, 0x00, sizeof(saddr));
+ saddr.sin_family = AF_INET;
+ saddr.sin_port = htons(UDEVD_PORT);
+ saddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+
+ signal(SIGINT, sig_handler);
+ signal(SIGTERM, sig_handler);
+ signal(SIGALRM, sig_handler);
+
+ /* try to connect, if it fails start daemon */
+ retval = connect(sock, &saddr, sizeof(saddr));
+ if (retval != -1) {
+ goto send;
+ } else {
+ dbg("connect failed, try starting daemon...");
+ retval = start_daemon();
+ if (retval == 0) {
+ dbg("daemon started");
+ } else {
+ dbg("error starting daemon");
+ goto exit;
+ }
}
- /* get state of ipc queue */
+ /* try to connect while the daemon starts */
tspec.tv_sec = 0;
- tspec.tv_nsec = 10000000; /* 10 millisec */
- loop = UDEVSEND_RETRY_COUNT;
+ tspec.tv_nsec = 100000000; /* 100 millisec */
+ loop = UDEVD_START_TIMEOUT;
while (loop--) {
- retval = msgctl(msgid, IPC_STAT, &msg_queue);
- if (retval == -1) {
- dbg("error getting info on ipc queue");
- goto exit;
- }
- if (msg_queue.msg_qnum == 0)
- goto exit;
+ retval = connect(sock, &saddr, sizeof(saddr));
+ if (retval != -1)
+ goto send;
+ else
+ dbg("retry to connect %d", UDEVD_START_TIMEOUT - loop);
nanosleep(&tspec, NULL);
}
+ dbg("error connecting to daemon, start daemon failed");
+ goto exit;
- info("message is still in the ipc queue, starting daemon...");
- retval = start_daemon();
+send:
+ size = build_hotplugmsg(&message, action, devpath, subsystem, seq);
+ retval = send(sock, &message, size, 0);
+ if (retval == -1) {
+ dbg("error sending message");
+ close (sock);
+ goto exit;
+ }
+ close (sock);
+ return 0;
exit:
- return retval;
+ return 1;
}
===== test/udevd_test.sh 1.4 vs edited =====
--- 1.4/test/udevd_test.sh Tue Jan 27 08:29:28 2004
+++ edited/test/udevd_test.sh Wed Jan 28 22:23:22 2004
@@ -1,7 +1,7 @@
#!/bin/bash
# kill daemon, first event will start it again
-killall udevd
+#killall udevd
# 3 x connect/disconnect sequence of sda/sdb/sdc