From: Bill Davidsen <davidsen@tmr.com>
To: Jeff Zheng <Jeff.Zheng@endace.com>
Cc: Neil Brown <neilb@suse.de>,
david@lang.hm,
Michal Piotrowski <michal.k.k.piotrowski@gmail.com>,
Ingo Molnar <mingo@elte.hu>,
linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-fsdevel@vger.kernel.org
Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB
Date: Tue, 22 May 2007 17:31:20 -0400
Message-ID: <46536128.50504@tmr.com>
In-Reply-To: <659F626D666070439A4A5965CD6EBF406B352A@gazelle.ad.endace.com>

Jeff Zheng wrote:
> Fix confirmed: filled the whole 11T hard disk without crashing.
> I presume this will go into 2.6.22.
>
Since it results in a full loss of data, I would hope it goes into
2.6.21.x -stable.
> Thanks again.
>
> Jeff
>
>> -----Original Message-----
>> From: linux-raid-owner@vger.kernel.org
>> [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Jeff Zheng
>> Sent: Thursday, 17 May 2007 5:39 p.m.
>> To: Neil Brown; david@lang.hm; Michal Piotrowski; Ingo
>> Molnar; linux-raid@vger.kernel.org;
>> linux-kernel@vger.kernel.org; linux-fsdevel@vger.kernel.org
>> Subject: RE: Software raid0 will crash the file-system, when
>> each disk is 5TB
>>
>>
>> Yeah, seems you've locked it down, :D. I've written 600GB of
>> data now, and everything is still fine.
>> I'll let it run overnight and fill the whole 11T. I'll post
>> the result tomorrow.
>>
>> Thanks a lot though.
>>
>> Jeff
>>
>>> -----Original Message-----
>>> From: Neil Brown [mailto:neilb@suse.de]
>>> Sent: Thursday, 17 May 2007 5:31 p.m.
>>> To: david@lang.hm; Jeff Zheng; Michal Piotrowski; Ingo Molnar;
>>> linux-raid@vger.kernel.org; linux-kernel@vger.kernel.org;
>>> linux-fsdevel@vger.kernel.org
>>> Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB
>>>
>>> On Thursday May 17, neilb@suse.de wrote:
>>>> Uhm, I just noticed something.
>>>> 'chunk' is unsigned long, and when it gets shifted up, we might
>>>> lose bits. That could still happen with the 4*2.75T arrangement,
>>>> but is much more likely in the 2*5.5T arrangement.
>>> Actually, it cannot be a problem with the 4*2.75T arrangement.
>>>   chunk << chunksize_bits
>>>
>>> will not exceed the size of the underlying device *in kilobytes*.
>>> In that case that is 0xAE9EC800, which will fit in a 32-bit long.
>>> We don't double it to make sectors until after we add
>>> zone->dev_offset, which is "sector_t", and so 64-bit arithmetic
>>> is used.
>>> So I'm quite certain this bug will cause exactly the problems
>>> experienced!!
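
[Aside, added for clarity and not part of the original mail: to make
the arithmetic concrete, a 2.75T member is 0xAE9EC800 KB =
2,929,772,544, which is below 2^32 = 4,294,967,296 and so fits in a
32-bit long; a 5.5T member is twice that, 0x15D3D9000 KB =
5,859,545,088, which needs 33 bits, so the 32-bit shift wraps and the
request is issued at the wrong offset.]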
>>>
>>>> Jeff, can you try this patch?
>>> Don't bother about the other tests I mentioned; just try this one.
>>> Thanks.
>>>
>>> NeilBrown
>>>
>>>> Signed-off-by: Neil Brown <neilb@suse.de>
>>>>
>>>> ### Diffstat output
>>>> ./drivers/md/raid0.c | 2 +-
>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff .prev/drivers/md/raid0.c ./drivers/md/raid0.c
>>>> --- .prev/drivers/md/raid0.c	2007-05-17 10:33:30.000000000 +1000
>>>> +++ ./drivers/md/raid0.c	2007-05-17 15:02:15.000000000 +1000
>>>> @@ -475,7 +475,7 @@ static int raid0_make_request (request_q
>>>>  		x = block >> chunksize_bits;
>>>>  		tmp_dev = zone->dev[sector_div(x, zone->nb_dev)];
>>>>  	}
>>>> -	rsect = (((chunk << chunksize_bits) + zone->dev_offset)<<1)
>>>> +	rsect = ((((sector_t)chunk << chunksize_bits) + zone->dev_offset)<<1)
>>>>  		+ sect_in_chunk;
>>>>
>>>>  	bio->bi_bdev = tmp_dev->bdev;
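
To see the wraparound outside the kernel, here is a minimal user-space
sketch (an editorial illustration, not from the thread; the chunk size,
single-zone layout, and member size are assumed from the figures above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Stand-ins for the kernel types on a 32-bit machine:
	 * 'unsigned long' is 32 bits wide, 'sector_t' is 64.      */
	uint32_t chunk;
	uint64_t dev_offset = 0;        /* assume a single zone    */
	unsigned chunksize_bits = 6;    /* 64KB chunks, assumed    */

	/* A chunk at the end of a 5.5T member: its offset in KB
	 * (5,859,545,088) no longer fits in 32 bits once shifted. */
	chunk = (uint32_t)(5859545088ULL >> chunksize_bits);

	/* Buggy form: the shift happens in 32-bit arithmetic and
	 * wraps before the 64-bit dev_offset is added.            */
	uint64_t bad = ((uint64_t)(chunk << chunksize_bits)
			+ dev_offset) << 1;

	/* Patched form: cast to 64 bits before shifting.          */
	uint64_t good = (((uint64_t)chunk << chunksize_bits)
			+ dev_offset) << 1;

	printf("wrapped: %llu sectors\n", (unsigned long long)bad);
	printf("fixed:   %llu sectors\n", (unsigned long long)good);
	return 0;
}

The buggy expression prints 3129155584 instead of 11719090176: the
request lands exactly 2^32 KB (4TiB) below its true offset, which is
the sort of misplaced write that destroys a filesystem.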
--
Bill Davidsen <davidsen@tmr.com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot