From: GrandMasterLee <masterlee@digitalroadkill.net>
To: Andrew Vasquez <praka@san.rr.com>
Cc: linux-kernel@vger.kernel.org,
Michael Clark <michael@metaparadigm.com>,
J Sloan <joe@tmsusa.com>, Simon Roscic <simon.roscic@chello.at>,
Arjan van de Ven <arjanv@redhat.com>
Subject: Re: [Kernel 2.5] Qlogic 2x00 driver
Date: 16 Oct 2002 21:44:11 -0500
Message-ID: <1034822651.27.3.camel@localhost>
In-Reply-To: <20021017015903.GA20960@praka.local.home>
On Wed, 2002-10-16 at 20:59, Andrew Vasquez wrote:
> > Yes, we have seen that ext3 is a stack hog in some cases, and I
> > know there were some fixes in later LVM versions to remove some
> > huge stack allocations. Arjan also reported stack problems with
> > qla2x00, so it is not a surprise that the combination causes
> > problems.
> >
> The stack issues were a major problem in the 5.3x series driver. I
> believe, I can check tomorrow, 5.38.9 (the driver Dell distributes)
> contains fixes for the stack clobbering -- qla2x00-rh1-3 also contain
> the fixes.
Does this mean that 6.01 will NOT work either? Which drivers are
affected? We've already removed LVM from the mix, but your comments
above leave me some doubt as to whether removing it will actually fix
the stack clobbering.
> IAC, I believe the support tech working with MasterLee had asked
> for additional information regarding the configuration as well as
> some basic logs. Ideally we'd like to set up a similar configuration
> in house and see what's happening...
In-house? Just curious. What can "I" do to verify that our
configuration won't break simply by removing LVM? TIA.
> --
> Andrew Vasquez | praka@san.rr.com |
> DSS: 0x508316BB, FP: 79BD 4FAC 7E82 FF70 6C2B 7E8B 168F 5529 5083 16BB
Thread overview: 35+ messages
2002-10-15 19:20 [Kernel 2.5] Qlogic 2x00 driver Simon Roscic
2002-10-15 19:31 ` Arjan van de Ven
2002-10-15 19:53 ` Simon Roscic
2002-10-16 2:51 ` Michael Clark
2002-10-16 3:56 ` GrandMasterLee
2002-10-16 4:30 ` Michael Clark
2002-10-16 4:35 ` J Sloan
2002-10-16 4:43 ` GrandMasterLee
2002-10-16 6:03 ` Michael Clark
2002-10-16 6:31 ` GrandMasterLee
2002-10-16 6:40 ` Michael Clark
2002-10-16 6:48 ` GrandMasterLee
2002-10-16 6:59 ` Michael Clark
2002-10-16 4:58 ` GrandMasterLee
2002-10-16 5:28 ` Michael Clark
2002-10-16 5:40 ` Andreas Dilger
2002-10-17 1:59 ` Andrew Vasquez
2002-10-17 2:44 ` GrandMasterLee [this message]
2002-10-17 3:11 ` Andrew Vasquez
2002-10-17 3:42 ` GrandMasterLee
2002-10-17 9:40 ` Michael Clark
2002-10-18 6:45 ` GrandMasterLee
2002-10-16 16:28 ` Simon Roscic
2002-10-16 16:49 ` Michael Clark
2002-10-17 3:12 ` GrandMasterLee
2002-10-17 3:54 ` Michael Clark
2002-10-17 4:08 ` GrandMasterLee
2002-10-17 5:03 ` Michael Clark
2002-10-16 5:02 ` GrandMasterLee
2002-10-16 16:38 ` Simon Roscic
2002-10-17 3:08 ` GrandMasterLee
2002-10-17 17:47 ` Simon Roscic
2002-10-18 6:42 ` GrandMasterLee
2002-10-18 15:11 ` Simon Roscic
-- strict thread matches above, loose matches on Subject: below --
2002-10-19 2:17 rwhron