public inbox for linux-kernel@vger.kernel.org
From: Felipe W Damasio <felipewd@terra.com.br>
To: Richard Procter <rnp@paradise.net.nz>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] Fix SMP support on 3c527 net driver, take 2
Date: Tue, 09 Sep 2003 10:14:31 -0300	[thread overview]
Message-ID: <3F5DD237.4040909@terra.com.br> (raw)
In-Reply-To: <Pine.LNX.4.21.0309091029230.252-100000@ps2.local>

	Hi Richard,

Richard Procter wrote:
> I've had a good look over your revised patch, and it looks fine to me. I
> didn't manage to get an MCA kernel booting to test it, but I'm not sure if
> it would have added a lot, especially as I don't have an SMP MCA machine.

	That's closer to a proper test box than mine, since I don't even 
have that NIC ;)

> That said, over the weekend I realised that the need to unroll wait_event
> was a consequence of using the same queue to perform two quite distinct
> functions: serialising the issuing of commands, and waiting for the card
> to complete command execution. This forces us to use a private variable to
> indicate which situation has occurred. That's ok on UP, but requires us to
> jump through hoops to use it safely on SMP with spinlocks. 

	True, but since the patch uses a per-device lock, aren't we safe (and 
relatively scalable) on SMP too?

	Unrolling wait_event into prepare_to_wait and friends is, IMHO, 
needed because wait_event calls schedule, and we can't do that with our 
device lock held...hence the prepare_to_wait/spin_unlock/schedule stuff 
on mc32::wait_pending.

	I don't know if using 2 different locks is worth it, since we may 
starve mc_reload on SMP...but given the amount of code dropped, it's 
worth testing to see if we scale better than the inline wait_pending 
stuff.

> I've rewritten things using completions (== semaphores?), and it's both
> cleaner and (unexpectedly) smaller (see example below). I'm in the process
> of convincing myself it all works; should have something out there by the
> end of the week.

	Great!

	Please let me know when you have something...I'm really enjoying 
helping to fix this bug. :)

> If there's a merge deadline coming up, please feel free to submit the
> patch, otherwise I'd like to hold off for a couple of days and see where
> we stand then.

	There isn't.

	Maybe we won't make 2.6.0-test5, but there's no need to rush things. 
Let's get this right.

	Kind Regards,

Felipe


Thread overview: 9+ messages
2003-09-02 20:24 [PATCH] Fix SMP support on 3c527 net driver, take 2 Felipe W Damasio
2003-09-03 13:42 ` Felipe W Damasio
2003-09-04  6:33   ` Richard Procter
2003-09-04 12:52     ` Felipe W Damasio
2003-09-04 13:22       ` Alan Cox
2003-09-04 13:28         ` Felipe W Damasio
2003-09-08 19:58           ` Felipe W Damasio
2003-09-08 23:49             ` Richard Procter
2003-09-09 13:14               ` Felipe W Damasio [this message]
