From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kay Sievers
Date: Thu, 29 Jan 2004 01:52:32 +0000
Subject: Re: [patch] udevd - cleanup and better timeout handling
Message-Id: <20040129015232.GA16558@vrfy.org>
List-Id:
References: <20040125200314.GA8376@vrfy.org>
In-Reply-To: <20040125200314.GA8376@vrfy.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
To: linux-hotplug@vger.kernel.org

On Wed, Jan 28, 2004 at 10:47:36PM +0100, Kay Sievers wrote:
> On Tue, Jan 27, 2004 at 11:13:09AM -0800, Greg KH wrote:
> > On Tue, Jan 27, 2004 at 08:08:09PM +0100, Kay Sievers wrote:
> > > On Tue, Jan 27, 2004 at 07:56:04AM +0100, Kay Sievers wrote:
> > > > On Mon, Jan 26, 2004 at 11:28:19AM -0800, Greg KH wrote:
> > > > > On Mon, Jan 26, 2004 at 08:11:10PM +0100, Kay Sievers wrote:
> > > > > > On Mon, Jan 26, 2004 at 10:22:34AM -0800, Greg KH wrote:
> > > > > > > On Sun, Jan 25, 2004 at 09:03:14PM +0100, Kay Sievers wrote:
> > > > > > > > 1. We are much too slow.
> > > > > > > >    We want to exec the real udev in the background, but a 'remove'
> > > > > > > >    is much, much faster than an 'add', so we have a problem.
> > > > > > > >    Question: is it necessary to order events for different devpaths?
> > > > > > > >    If not, we may waitpid() for the exec only if we have another udev
> > > > > > > >    working on the same devpath.
> > > > > > >
> > > > > > > But how will we keep track of that? It's probably easier just to wait
> > > > > > > for each one to finish, right?
> > > > > >
> > > > > > We leave the message in the queue until we reach the SIGCHLD for this
> > > > > > pid. So we can search the queue to see whether we are already working
> > > > > > on this devpath, and delay the new exec until the former exec comes
> > > > > > back.
> > > > >
> > > > > Ok, if that isn't too much trouble.
> > > > >
> > > > > > Is it feasible to run in parallel for different paths, or can you
> > > > > > think of any problem?
> > > > >
> > > > > I can't think of any problem with that.
> > > >
> > > > Here is the next round. We have three queues now. All incoming
> > > > messages are queued in msg_list, and if nothing is missing we move
> > > > the message to the running_list and exec in the background.
> > > > When the exec comes back, it removes the message from the
> > > > running_list and frees the message.
> > > >
> > > > Before we exec, we check the running_list for a udev already running
> > > > on the same device path. If there is one, we move the message to the
> > > > delay_list. When the former exec comes back, we move the message to
> > > > the running_list and exec it.
> > >
> > > Oh, sorry, forget about it for now.
> > > I will come up with something better tested.
> >
> > Oops, I just applied this version :)
> >
> > I'll try testing it later today.
>
> Oh, I couldn't resist trying threads.
> It's a multithreaded udevd that communicates through a localhost socket.
> The message includes a magic with the udev version, so we don't accept
> messages from older udevsend versions.
>
> No need for locking, because we can't bind two sockets to the same
> address. udevsend tries to connect, and if that fails it starts the
> daemon.
>
> We create a thread for every incoming connection, hand over the socket,
> sort the message into the global message queue and exit the thread.
> Huh, that was easy with threads :)
>
> When a message is added, we wake up the queue manager thread, which
> handles timeouts or moves the message to the global exec list.
> This wakes up the exec list manager, which checks whether a process is
> already running for this device path. If yes, the exec is delayed;
> otherwise we create a thread that execs udev in the background. When
> udev returns, we free the message and wake up the exec list manager to
> check whether something is pending.
>
> It is just a quick shot, because I couldn't solve the problems with fork
> and scheduling, and I wanted to see if I'm too stupid :)
> But if anybody has a better idea or more experience with I/O scheduling,
> we may go another way. The remaining problem is that klibc doesn't
> support threads.
>
> For now, we don't exec anything; it's just a sleep 3 for every exec,
> but you can see the queue management by watching syslog and doing:
>
>   DEVPATH=/abc ACTION=add SEQNUM=0 ./udevsend /abc

Here is the next version, which also does the exec.
Please have a look.

thanks,
Kay
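
For illustration, here is a minimal sketch of the per-devpath
serialization described above: an exec only starts if no udev is already
running on the same devpath; otherwise the message waits on the
delay_list until the running exec comes back. The struct layout and
function names are invented for this sketch (only the running_list and
delay_list names follow the mail), so don't read it as the actual udevd
source. It compiles standalone with glibc's sys/queue.h:

    /* sketch only: per-devpath serialization, names are illustrative */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/queue.h>

    struct hotplug_msg {
        TAILQ_ENTRY(hotplug_msg) link;
        char devpath[128];
        char action[16];
        int seqnum;
    };

    TAILQ_HEAD(msg_queue, hotplug_msg);

    static struct msg_queue running_list = TAILQ_HEAD_INITIALIZER(running_list);
    static struct msg_queue delay_list   = TAILQ_HEAD_INITIALIZER(delay_list);

    /* is a udev already running for this devpath? */
    static struct hotplug_msg *find_running(const char *devpath)
    {
        struct hotplug_msg *msg;

        TAILQ_FOREACH(msg, &running_list, link)
            if (strcmp(msg->devpath, devpath) == 0)
                return msg;
        return NULL;
    }

    /* stand-in for creating the thread that execs the real udev */
    static void run_exec(struct hotplug_msg *msg)
    {
        TAILQ_INSERT_TAIL(&running_list, msg, link);
        printf("exec udev: %s %s (seq %d)\n",
               msg->action, msg->devpath, msg->seqnum);
    }

    /* exec list manager: called when a message is ready to run */
    static void exec_list_manager(struct hotplug_msg *msg)
    {
        if (find_running(msg->devpath) != NULL) {
            /* same devpath still busy: delay, keep per-device ordering */
            TAILQ_INSERT_TAIL(&delay_list, msg, link);
            printf("delayed:   %s %s (seq %d)\n",
                   msg->action, msg->devpath, msg->seqnum);
        } else {
            run_exec(msg);
        }
    }

    /* called when an exec returns: free the message, start a delayed one */
    static void exec_done(struct hotplug_msg *msg)
    {
        struct hotplug_msg *pending;
        char devpath[sizeof(msg->devpath)];

        strcpy(devpath, msg->devpath);
        TAILQ_REMOVE(&running_list, msg, link);
        free(msg);

        TAILQ_FOREACH(pending, &delay_list, link)
            if (strcmp(pending->devpath, devpath) == 0) {
                TAILQ_REMOVE(&delay_list, pending, link);
                run_exec(pending);
                break;
            }
    }

    static struct hotplug_msg *new_msg(const char *devpath,
                                       const char *action, int seqnum)
    {
        struct hotplug_msg *msg = calloc(1, sizeof(*msg));

        strcpy(msg->devpath, devpath);
        strcpy(msg->action, action);
        msg->seqnum = seqnum;
        return msg;
    }

    int main(void)
    {
        struct hotplug_msg *add = new_msg("/block/sda", "add", 0);
        struct hotplug_msg *rem = new_msg("/block/sda", "remove", 1);

        exec_list_manager(add);  /* runs at once */
        exec_list_manager(rem);  /* delayed: 'add' still running */
        exec_done(add);          /* 'remove' starts now */
        exec_done(rem);
        return 0;
    }

This is exactly the fast-'remove'-vs-slow-'add' case from the start of
the thread: the 'remove' for /block/sda is held back until the 'add' for
the same devpath has finished, while events for other devpaths could run
in parallel.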
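And a sketch of the "no locking needed" startup: only one process can
bind the loopback address, so whoever owns it is the daemon, and
everybody else just connects and sends. The mail describes it from the
udevsend side (connect first, start the daemon on failure); this shows
the same trick from the bind side. The port number, the wire format and
the magic value are invented for the example; the mail only says that a
magic with the udev version is included:

    /* sketch only: port, wire format and magic are made up */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <errno.h>
    #include <stdint.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define UDEVD_PORT  51000      /* illustrative, not the real port */
    #define UDEVD_MAGIC 0x0016     /* illustrative udev version magic */

    struct udevsend_msg {          /* illustrative wire format */
        uint32_t magic;
        char devpath[128];
        char action[16];
        int seqnum;
    };

    int main(void)
    {
        struct sockaddr_in addr;
        int sock = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(UDEVD_PORT);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        if (bind(sock, (struct sockaddr *) &addr, sizeof(addr)) == 0) {
            /* we own the address: we are the one and only daemon */
            listen(sock, 16);
            printf("udevd: listening on 127.0.0.1:%d\n", UDEVD_PORT);
            /* accept() loop goes here: one thread per connection sorts
             * the message into the global queue and exits, as in the mail */
        } else if (errno == EADDRINUSE) {
            /* a daemon is already running: act as udevsend and deliver;
             * the receiver drops messages whose magic doesn't match */
            struct udevsend_msg msg = { .magic = UDEVD_MAGIC, .seqnum = 0 };

            strcpy(msg.devpath, "/abc");
            strcpy(msg.action, "add");
            connect(sock, (struct sockaddr *) &addr, sizeof(addr));
            write(sock, &msg, sizeof(msg));
        }
        close(sock);
        return 0;
    }

The nice property is that the bind itself is the lock: there is no
pidfile or lockfile that can go stale, because the kernel releases the
address when the daemon dies.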