From: Jens Axboe <jens.axboe@oracle.com>
To: Lars Ellenberg <lars.ellenberg@linbit.com>
Cc: Satyam Sharma <satyam.sharma@gmail.com>,
	Kyle Moffett <mrlinuxman@mac.com>, Andrew Morton <akpm@osdl.org>,
	lkml <linux-kernel@vger.kernel.org>
Subject: Re: [DRIVER SUBMISSION] DRBD wants to go mainline
Date: Tue, 24 Jul 2007 09:36:51 +0200	[thread overview]
Message-ID: <20070724073650.GG3287@kernel.dk> (raw)
In-Reply-To: <20070723211947.GD6477@mail.linbit.com>

On Mon, Jul 23 2007, Lars Ellenberg wrote:
> On Mon, Jul 23, 2007 at 07:10:58PM +0530, Satyam Sharma wrote:
> > On 7/23/07, Lars Ellenberg <lars.ellenberg@linbit.com> wrote:
> > >On Sun, Jul 22, 2007 at 09:32:02PM -0400, Kyle Moffett wrote:
> > >[...]
> > >> Don't use signals between kernel threads, use proper primitives like
> > >> notifiers and waitqueues, which means you should also probably switch
> > >> away from kernel_thread() to the kthread_*() APIs.  Also you should
> > >> fix this FIXME or remove it if it no longer applies :-D.
> > >
> > >right.
> > >but how do I tell a network thread blocked in tcp_recvmsg to stop early,
> > >without using signals?
> > 
> > 
> > Yup, the kthreads API cannot properly stop a kernel thread that is
> > sleeping in a function that may block forever until signalled, such as
> > tcp_recvmsg or skb_recv_datagram.
> > 
> > There are two workarounds (see the sketch after this list):
> > 1. Use sk_rcvtimeo and a while/continue loop, so the receive wakes up
> >    periodically to re-check for a stop request
> > 2. force_sig(SIGKILL) to your kernel thread just before kthread_stop
> >    (note that you don't need to allow / write code to handle signals
> >    in your kthread code -- force_sig will work automatically)
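
A minimal sketch of workaround 1, assuming a kthread that is handed an
already connected struct socket (the function and buffer names here are
made up for illustration, this is not DRBD code):

    #include <linux/kthread.h>
    #include <linux/net.h>
    #include <net/sock.h>

    /* receiver loop: bound the blocking receive with sk_rcvtimeo so
     * the thread notices a stop request within a second */
    static int my_recv_thread(void *data)
    {
            struct socket *sock = data;
            char buf[128];
            int rv;

            /* wake up at least once per second to re-check
             * kthread_should_stop() */
            sock->sk->sk_rcvtimeo = HZ;

            while (!kthread_should_stop()) {
                    struct kvec iov = {
                            .iov_base = buf,
                            .iov_len  = sizeof(buf),
                    };
                    struct msghdr msg = { .msg_flags = 0 };

                    rv = kernel_recvmsg(sock, &msg, &iov, 1,
                                        sizeof(buf), 0);
                    if (rv == -EAGAIN) /* timeout: loop, re-check */
                            continue;
                    if (rv <= 0)       /* error or peer closed */
                            break;
                    /* process rv bytes from buf here */
            }
            return 0;
    }

For workaround 2, the stopping side would issue force_sig(SIGKILL, task)
immediately before kthread_stop(task) to knock the thread out of the
blocking call.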
> 
> this is not only an issue at stop time.
> for example, our "drbd_asender" thread
> does receive as well as send, and the sending
> latency is crucial to performance, while the recv
> will not time out for the next few seconds.
> 
> > >> +/* THINK maybe we actually want to use the default "event/%s" worker
> > >> + * threads or similar in linux 2.6, which uses per cpu data and threads.
> > >> + *
> > >> + * To be general, this might need a spin_lock member.
> > >> + * For now, please use the mdev->req_lock to protect list_head,
> > >> + * see drbd_queue_work below.
> > >> + */
> > >> +struct drbd_work_queue {
> > >> +       struct list_head q;
> > >> +       struct semaphore s; /* producers up it, worker down()s it */
> > >> +       spinlock_t q_lock;  /* to protect the list. */
> > >> +};
> > >>
> > >> Umm, how about fixing this to actually use proper workqueues or something
> > >> instead of this open-coded mess?
> > >
> > >unlikely to happen "right now".
> > >but it is on our todo list...
> > 
> > It should be easier to do it now (if you defer it, the code will only
> > grow more complex). Also, removing this gunk from your driver will
> > clearly make it smaller and easier for us to review :-)
> 
> and it will poison the generic work queues with items that might block
> somewhere deep in the TCP stack, and which we are not able to cancel.
> not exactly desirable, either.
> but maybe I am missing something?

Create your own (single threaded) work queue using
create_singlethread_workqueue().
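
As a rough sketch of how that could replace the open-coded queue quoted
above (all names here are hypothetical, not taken from the DRBD source):

    #include <linux/workqueue.h>

    /* a private single-threaded workqueue keeps work that may block
     * deep in the TCP stack off the shared per-cpu event threads */
    static struct workqueue_struct *drbd_wq;

    struct drbd_work {
            struct work_struct w;
            /* driver-specific payload would go here */
    };

    static void drbd_work_fn(struct work_struct *w)
    {
            struct drbd_work *dw = container_of(w, struct drbd_work, w);

            (void)dw; /* real work goes here; it may sleep or block */
    }

    static int drbd_wq_init(void)
    {
            drbd_wq = create_singlethread_workqueue("drbd_wq");
            return drbd_wq ? 0 : -ENOMEM;
    }

    /* producers queue work instead of up()ing a semaphore */
    static void drbd_queue(struct drbd_work *dw)
    {
            INIT_WORK(&dw->w, drbd_work_fn);
            queue_work(drbd_wq, &dw->w);
    }

On module exit, destroy_workqueue(drbd_wq) flushes any pending work and
stops the thread.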

-- 
Jens Axboe


