linux-kernel.vger.kernel.org archive mirror
* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
@ 2003-10-18 16:54 Mudama, Eric
  2003-10-18 18:19 ` Maciej Zenczykowski
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Mudama, Eric @ 2003-10-18 16:54 UTC (permalink / raw)
  To: 'John Bradford', Krzysztof Halasa
  Cc: Rogier Wolff, Norman Diamond, Hans Reiser, Wes Janzen,
	linux-kernel



> -----Original Message-----
> From: John Bradford [mailto:john@grabjohn.com]
>
> Drive manufacturers could sell advanced firmware to data recovery
> companies for a price that would pay for itself after 3-4 data
> recovery jobs.  Given that you could then do far more advanced
> recovery than people could themselves, I am surprised this hasn't
> happened before.  Of course, free and open firmware would be nice in
> general, but that hasn't arrived yet.

To pay for itself it would have to cost multiple millions of dollars.  The
#1 constraint in an IDE drive is cost per gigabyte, since 99.9% of
purchasers don't look at anything else.  This means that we strip down
things like our electronics and internal mask ROMs to their minimum required
size.  Specialized code with extra features would inherently be larger,
which gives two choices:

1. burden 60 million drives per year with the capability to run this
software
2. build a 1-off or 2-off for the few times a year that you get asked
for this

#1 is prohibitive from a cost perspective since the demand simply isn't that
high.  #2 is prohibitive because of the engineering and manufacturing
resources required to build a special product.

Plus, all data recovery would be on drives already sold...  Since every
drive optimizes itself as part of the manufacturing process to the exact
capabilities of the channel ASIC, heads they were manufactured with, etc,
the only way for these new recovery tools to work reliably would be to use
option #1 above, which I've already said isn't worth the cost.  I hear about
people swapping PCBs on disk drives to recover data when one fries... yes
this can work to some degree, but I absolutely wouldn't trust anything
written in a swapped-board setup.

The community of knowledgeable users who could use such features and would be
willing to pay, say, $20 extra for the cost, is nothing next to the number
of users who go to Dell's website and say "this drive is 20GB more for $10
less, let's get this one!"

> Although, to be honest, except where performance is critical, remap on
> read is pointless.  It saves you from having to identify the bad block
> again when you write to it.  Generally, guaranteed remap on write is
> what I want.  What happens on read is less important if your data
> isn't intact.  I can see your point of view for not re-mapping on read
> given that advanced firmwares are not available, and the fact that it
> allows you to do some form of data recovery.  Overall, though, if it
> gets to the point where you have to start doing such data recovery,
> downtime is usually significant, and for some applications, having the
> data in a week's time may be little more than useless.  Predicting
> possible disk failures is a good idea.

Writes are destructive, and very often "fix" the problem on the media.  If
the write succeeds, and can be read by the disk, there's no point in
remapping.  It is only when you're unable to write to a specific area that
remap-on-write makes any sense.

We keep track of where we have trouble reading or writing, and use that to
reassign based on various criteria automatically.

The best data to use, I'd guess, for "predicting" failures is the blown-rev
counter in SMART.  If you're blowing revs, you're having trouble getting the
data you want off or onto the drive.
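For context, the "blown rev" counter Eric mentions was a vendor-internal statistic with no standard public SMART attribute ID; the closest publicly visible signals are the reallocation-related attributes. A minimal sketch of scanning `smartctl -A`-style output for them (the sample output, attribute set, and nonzero-raw-count heuristic are all illustrative, not from any real drive):

```python
# Hedged sketch: flag SMART attributes that, per the thread, correlate
# with read/write retries. Attribute names and meanings vary by vendor;
# the canned sample below stands in for real `smartctl -A` output.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    12
197 Current_Pending_Sector  0x0012   100   100   000    3
198 Offline_Uncorrectable   0x0010   100   100   000    0
"""

WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable"}

def suspicious_attrs(smart_text):
    """Return {attribute_name: raw_value} for watched attributes
    whose raw count is nonzero."""
    out = {}
    for line in smart_text.splitlines():
        parts = line.split()
        if len(parts) >= 7 and parts[1] in WATCH:
            raw = int(parts[6])
            if raw > 0:
                out[parts[1]] = raw
    return out

print(suspicious_attrs(SAMPLE))
```

On a real system the input would come from running `smartctl -A` against the device; the parsing itself is the same.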

--eric

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
@ 2003-10-18 17:18 Mudama, Eric
  2003-10-18 18:06 ` Matthias Urlichs
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Mudama, Eric @ 2003-10-18 17:18 UTC (permalink / raw)
  To: 'Nuno Silva', linux-kernel



> -----Original Message-----
> From: Nuno Silva [mailto:nuno.silva@vgertech.com]
>
> > 
> > Doing cat /dev/zero > /dev/hd* fixes all bad sectors on modern drives.
> 
> Yeah! I'm doing this right now because the data in hda is very important
> and I haven't done backups since August!! :-D

If current trends hold, in the next few years hard drives are going to have
to pick up and rewrite their data continuously to avoid signal decay on the
media... a drive is becoming more like a DRAM cell than a stone tablet.
(And yes, I've heard all the jokes about bricks/stones/etc)
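The refresh a drive would have to perform is just read-then-rewrite-in-place. A sketch against a scratch file (the filename and block layout are invented for illustration; targeting a real device node at the sector's byte offset would be the destructive equivalent):

```python
# Hedged sketch: refresh a block in place -- read it, then write the
# same bytes back at the same offset -- which is what a drive fighting
# signal decay would do internally. Demonstrated on a scratch file.
BLK = 512

with open("scratch.img", "wb") as f:
    f.write(b"\0" * BLK * 16)          # 16-block scratch "disk"

# put recognizable data in block 7
with open("scratch.img", "r+b") as f:
    f.seek(7 * BLK)
    f.write(b"hello")

def refresh_block(path, n, blk=BLK):
    """Read block n and write it straight back to the same offset:
    a no-op for the data, a fresh write for the media."""
    with open(path, "r+b") as f:
        f.seek(n * blk)
        data = f.read(blk)
        f.seek(n * blk)
        f.write(data)
    return data

assert refresh_block("scratch.img", 7)[:5] == b"hello"
```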


* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-18 17:18 Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?) Mudama, Eric
@ 2003-10-18 18:06 ` Matthias Urlichs
  2003-10-20 15:54 ` Rik van Riel
  2003-10-21 19:10 ` H. Peter Anvin
  2 siblings, 0 replies; 15+ messages in thread
From: Matthias Urlichs @ 2003-10-18 18:06 UTC (permalink / raw)
  To: linux-kernel

Hi, Mudama, Eric wrote:

> If current trends hold, in the next few years, hard drives are going to have
> to pick up and rewrite their data continuously to avoid signal decay on the
> media...

I expect I'd be VERY unhappy if I couldn't put a complete computer in
storage any more, and expect it to work when I turn it back on in two
months / years.

What timeframe are you talking about here anyway?

Oh well, I do remember the times when disks didn't work the next
_day_ because they developed stiction and the only way to get them to run
again was to peel off the label near the center and give the thing a
not-so-gentle push with a screwdriver... in fact we had a contest to see
how long an 80-MB disk would continue to work with the top off. :-)

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  smurf@smurf.noris.de
Disclaimer: The quote was selected randomly. Really. | http://smurf.noris.de
 - -
:mouse belt: n. See {rat belt}.



* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-18 16:54 Mudama, Eric
@ 2003-10-18 18:19 ` Maciej Zenczykowski
  2003-10-18 20:08 ` John Bradford
  2003-10-19 22:53 ` Pavel Machek
  2 siblings, 0 replies; 15+ messages in thread
From: Maciej Zenczykowski @ 2003-10-18 18:19 UTC (permalink / raw)
  To: Mudama, Eric
  Cc: 'John Bradford', Krzysztof Halasa, Rogier Wolff,
	Norman Diamond, Hans Reiser, Wes Janzen, linux-kernel

> To pay for itself it would have to cost multiple millions of dollars.  The
> #1 constraint in an IDE drive is cost per gigabyte, since 99.9% of
> purchasers don't look at anything else.  This means that we strip down
> things like our electronics and internal mask ROMs to their minimum required
> size.  Specialized code with extra features would inherently be larger,
> which gives two choices:

Would a single command to read a sector ignoring drive remaps really be 
that hard/expensive/large in size to implement?  I'd expect this would 
easily fit in the spare room at the end of the ROM - the function is 
pretty much (no doubt) already implemented - it just lacks an external 
interface.
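The asymmetry Maciej is pointing at -- the firmware already resolves the logical-to-physical mapping on every read, so a bypass command mostly needs an external interface -- can be shown with a toy model (entirely hypothetical; real firmware is nothing like this, and the class names are invented):

```python
# Toy model of a drive's defect remap table. "read" follows the remap,
# which is all a normal host ever sees; "read_physical" ignores it,
# which is the recovery command being proposed. Illustrative only.
class ToyDrive:
    def __init__(self, sectors):
        self.media = list(sectors)   # physical sectors
        self.remap = {}              # logical LBA -> spare physical index

    def reassign(self, lba, spare_idx):
        self.remap[lba] = spare_idx

    def read(self, lba):
        # the normal read path: honor the remap table
        return self.media[self.remap.get(lba, lba)]

    def read_physical(self, lba):
        # the hypothetical recovery path: bypass the remap table
        return self.media[lba]

d = ToyDrive(["a", "b", "OLD", "d", "SPARE"])
d.media[4] = "NEW"
d.reassign(2, 4)                 # LBA 2 reallocated to spare index 4
assert d.read(2) == "NEW"        # what the host sees after remap
assert d.read_physical(2) == "OLD"  # what a bypass command could expose
```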

I think this is all we really need - sure more would be nice, but this 
would suffice for those who say remaps without reading in the data 
correctly first are bad.

Cheers,
MaZe.



* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-18 16:54 Mudama, Eric
  2003-10-18 18:19 ` Maciej Zenczykowski
@ 2003-10-18 20:08 ` John Bradford
  2003-10-19 22:53 ` Pavel Machek
  2 siblings, 0 replies; 15+ messages in thread
From: John Bradford @ 2003-10-18 20:08 UTC (permalink / raw)
  To: Mudama, Eric, Krzysztof Halasa
  Cc: Rogier Wolff, Norman Diamond, Hans Reiser, Wes Janzen,
	linux-kernel

> Plus, all data recovery would be on drives already sold...  Since every
> drive optimizes itself as part of the manufacturing process to the exact
> capabilities of the channel ASIC, heads they were manufactured with, etc,
> the only way for these new recovery tools to work reliably would be to use
> option #1 above, which I've already said isn't worth the cost.  I hear about
> people swapping PCBs on disk drives to recover data when one fries... yes
> this can work to some degree, but I absolutely wouldn't trust anything
> written in a swapped-board setup.

Ah, OK, this is interesting: so basically it's not realistic to
produce 'data recovery PCBs' for $5000 each, which allow direct
head-seeks, raw data extraction, etc.  Fair enough.  I'm not really
interested in data recovery at this level, to be honest; something is
very wrong if backups haven't been made and dying drives haven't been
detected long before then.

> > Although, to be honest, except where performance is critical, remap on
> > read is pointless.  It saves you from having to identify the bad block
> > again when you write to it.  Generally, guaranteed remap on write is
> > what I want.  What happens on read is less important if your data
> > isn't intact.  I can see your point of view for not re-mapping on read
> > given that advanced firmwares are not available, and the fact that it
> > allows you to do some form of data recovery.  Overall, though, if it
> > gets to the point where you have to start doing such data recovery,
> > downtime is usually significant, and for some applications, having the
> > data in a week's time may be little more than useless.  Predicting
> > possible disk failures is a good idea.
> 
> Writes are destructive, and very often "fix" the problem on the media.  If
> the write succeeds, and can be read by the disk, there's no point in
> remapping.

I totally agree - the world outside the drive doesn't even need to
know this, and the drive should be trusted to make the right decision,
and be critical about it (i.e. use an area previously thought to be
bad, but only if it tests _really_ good now, not marginal, requiring
multiple reads and every last bit of error correction possible to get
the data back).

I am not saying that it's a great thing to risk re-using an area that
was previously bad, far from it, but the place to make the decision as
to whether the area is indeed bad or not is inside the drive, based on
all the data it has available, not from the host, based on the
limited, interpreted data available from the drive.

> It is only when you're unable to write to a specific area that
> remap-on-write makes any sense.

But it is very important (in my opinion) that it _does_ remap in
that case.  Specifically, I don't think that an unrecoverable read
error should prevent any future writes to that LBA address from
succeeding, unless the drive is out of spare blocks.

Reporting a write failure to the user should never happen on a drive
capable of defect management.
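The policy John describes can be sketched as a toy model: a previously unreadable sector never blocks writes, a write that fails on the original location is silently reassigned to a spare, and only spare exhaustion surfaces an error to the host. The classes and their names are invented for this sketch; real defect management is far more involved.

```python
# Toy model of remap-on-write defect management. Illustrative only.
class ToySector:
    def __init__(self):
        self.writable = True
        self.data = None

class ToyDrive:
    def __init__(self, n, spares):
        self.sectors = [ToySector() for _ in range(n)]
        self.spares = [ToySector() for _ in range(spares)]
        self.remap = {}              # LBA -> spare sector

    def write(self, lba, data):
        target = self.remap.get(lba, self.sectors[lba])
        if target.writable:
            target.data = data
            return True
        if not self.spares:          # out of spares: error reaches host
            return False
        spare = self.spares.pop()
        spare.data = data
        self.remap[lba] = spare      # silent reassignment
        return True

d = ToyDrive(8, spares=1)
d.sectors[3].writable = False        # media defect at LBA 3
assert d.write(3, b"x")              # failing write gets remapped, not reported
assert d.write(5, b"y")              # healthy sectors unaffected
```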

> We keep track of where we have trouble reading or writing, and use that to
> reassign based on various criteria automatically.
> 
> Best data to use, I'd guess, for "predicting" failures, is the blown rev
> counter in smart.  If you're blowing revs, you're having trouble getting the
> data you want off or onto the drive.

Which attribute is that?  I can't see anything like that in the SMART
output from a Maxtor disk, but it sounds like a useful measurement :-/.

John.


* Re: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-18 16:54 Mudama, Eric
  2003-10-18 18:19 ` Maciej Zenczykowski
  2003-10-18 20:08 ` John Bradford
@ 2003-10-19 22:53 ` Pavel Machek
  2 siblings, 0 replies; 15+ messages in thread
From: Pavel Machek @ 2003-10-19 22:53 UTC (permalink / raw)
  To: Mudama, Eric
  Cc: 'John Bradford', Krzysztof Halasa, Rogier Wolff,
	Norman Diamond, Hans Reiser, Wes Janzen, linux-kernel

Hi!

> option #1 above, which I've already said isn't worth the cost.  I hear about
> people swapping PCBs on disk drives to recover data when one fries... yes
> this can work to some degree, but I absolutely wouldn't trust anything
> written in a swapped-board setup.

Which is okay; if you are doing this, you are not trying to "save" the
disk, you are trying to save the data... Which reminds me I should
back up the Hitachi HDD in my notebook. It already hates me for the
stuff I've done to it. [Taking a walk with the running notebook in a
bag because this damn beast pretends to be powered off, then magically
comes back a few seconds later... Running the notebook at near-critical
temperature for ~10 hours -- resulting in severe disk errors because
the drive failed to detect it was overheated and kept operating....]
								Pavel
-- 
When do you have a heart between your knees?
[Johanka's followup: and *two* hearts?]


* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-18 17:18 Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?) Mudama, Eric
  2003-10-18 18:06 ` Matthias Urlichs
@ 2003-10-20 15:54 ` Rik van Riel
  2003-10-20 16:09   ` Richard B. Johnson
  2003-10-21 19:10 ` H. Peter Anvin
  2 siblings, 1 reply; 15+ messages in thread
From: Rik van Riel @ 2003-10-20 15:54 UTC (permalink / raw)
  To: Mudama, Eric; +Cc: 'Nuno Silva', linux-kernel

On Sat, 18 Oct 2003, Mudama, Eric wrote:

> If current trends hold, in the next few years, hard drives are going to
> have to pick up and rewrite their data continuously to avoid signal
> decay on the media... a drive gets closer and closer to a DRAM cell than
> a stone tablet.

If the current trends hold, most computers won't be powered
on long enough to read all the data that will fit on a disk.

-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan



* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-20 15:54 ` Rik van Riel
@ 2003-10-20 16:09   ` Richard B. Johnson
  2003-10-20 16:24     ` Chris Friesen
  2003-10-20 17:49     ` John Bradford
  0 siblings, 2 replies; 15+ messages in thread
From: Richard B. Johnson @ 2003-10-20 16:09 UTC (permalink / raw)
  To: Rik van Riel; +Cc: Mudama, Eric, 'Nuno Silva', linux-kernel

On Mon, 20 Oct 2003, Rik van Riel wrote:

> On Sat, 18 Oct 2003, Mudama, Eric wrote:
>
> > If current trends hold, in the next few years, hard drives are going to
> > have to pick up and rewrite their data continuously to avoid signal
> > decay on the media... a drive gets closer and closer to a DRAM cell than
> > a stone tablet.
>
> If the current trends hold, most computers won't be powered
> on long enough to read all the data that will fit on a disk.

Yeah, with a demonstrated 30 year MTBF of the power-grid and
standby power we might make it.........

Battery-backed SRAM "drives" in the gigabyte sizes already exist.
Terabytes should not be too far off.

Soon those "drives" will be as cheap as their mechanical emulations
and you won't need those metal boxes with the rotating mass anymore.
The batteries last about 10 years. Better than most mechanical
drives.

Cheers,
Dick Johnson
Penguin : Linux version 2.4.22 on an i686 machine (797.90 BogoMips).
            Note 96.31% of all statistics are fiction.




* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
@ 2003-10-20 16:14 Mudama, Eric
  0 siblings, 0 replies; 15+ messages in thread
From: Mudama, Eric @ 2003-10-20 16:14 UTC (permalink / raw)
  To: 'root@chaos.analogic.com', Rik van Riel
  Cc: 'Nuno Silva', linux-kernel



> -----Original Message-----
>
> Battery-backed SRAM "drives" in the gigabyte sizes already exist.
> Terabytes should not be too far off.
> 
> Soon those "drives" will be as cheap as their mechanical emulations
> and you won't need those metal boxes with the rotating mass anymore.
> The batteries last about 10 years. Better than most mechanical
> drives.

I'm looking forward to a solid state primary hard drive.

However, they've been saying that solid state will replace mechanical for
close to 10 years now... yet our mechanical drives have doubled in size
twice in under 3 years...

I'm sure it'll happen someday, but it may be 5-10 years before it actually
happens.

--eric


* Re: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-20 16:09   ` Richard B. Johnson
@ 2003-10-20 16:24     ` Chris Friesen
  2003-10-21 19:13       ` H. Peter Anvin
  2003-10-20 17:49     ` John Bradford
  1 sibling, 1 reply; 15+ messages in thread
From: Chris Friesen @ 2003-10-20 16:24 UTC (permalink / raw)
  To: root; +Cc: Rik van Riel, Mudama, Eric, 'Nuno Silva', linux-kernel

Richard B. Johnson wrote:

> Battery-backed SRAM "drives" in the gigabyte sizes already exist.
> Terabytes should not be too far off.
> 
> Soon those "drives" will be as cheap as their mechanical emulations
> and you won't need those metal boxes with the rotating mass anymore.
> The batteries last about 10 years. Better than most mechanical
> drives.

I'm dubious.  RAM costs about 150-200X as much per gigabyte as hard
drives.  I don't see that changing.

Chris


-- 
Chris Friesen                    | MailStop: 043/33/F10
Nortel Networks                  | work: (613) 765-0557
3500 Carling Avenue              | fax:  (613) 765-2986
Nepean, ON K2H 8E9 Canada        | email: cfriesen@nortelnetworks.com



* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-20 17:49     ` John Bradford
@ 2003-10-20 17:48       ` David Lang
  2003-10-20 18:29         ` John Bradford
  0 siblings, 1 reply; 15+ messages in thread
From: David Lang @ 2003-10-20 17:48 UTC (permalink / raw)
  To: John Bradford
  Cc: Richard B. Johnson, Rik van Riel, Mudama, Eric,
	'Nuno Silva', linux-kernel

rotating storage is hitting $1 per gig, memory is running ~$100/gig
(substantially more for the highest density memory)

making a small solid state drive is easy, cheap, and definitely has some
uses, but making something that will replace stacks of 300G drives is
neither cheap nor easy.

David Lang


On Mon, 20 Oct 2003, John Bradford wrote:

> Date: Mon, 20 Oct 2003 18:49:26 +0100
> From: John Bradford <john@grabjohn.com>
> To: Richard B. Johnson <root@chaos.analogic.com>,
>      Rik van Riel <riel@redhat.com>
> Cc: "Mudama, Eric" <eric_mudama@Maxtor.com>,
>      'Nuno Silva' <nuno.silva@vgertech.com>, linux-kernel@vger.kernel.org
> Subject: RE: Blockbusting news,
>      this is important (Re: Why are bad disk sectors numbered strangely,
>      and what happens to them?)
>
> > Battery-backed SRAM "drives" in the gigabyte sizes already exist.
> > Terabytes should not be too far off.
> >
> > Soon those "drives" will be as cheap as their mechanical emulations
> > and you won't need those metal boxes with the rotating mass anymore.
> > The batteries last about 10 years. Better than most mechanical
> > drives.
>
> You could make a solid state device really cheaply yourself - all you
> need is a simple circuit that will allow you to connect 512 Mb of
> EPROMs to the parallel port, and write a device driver to make them
> appear as a block device.  If you want to boot from it, just find any
> old network card with a boot PROM socket, write a bootloader which
> could read a kernel image from the parallel port connected device,
> write that bootloader to a PROM, and put it on the network card.
>
> John.
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>

-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan


* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-20 16:09   ` Richard B. Johnson
  2003-10-20 16:24     ` Chris Friesen
@ 2003-10-20 17:49     ` John Bradford
  2003-10-20 17:48       ` David Lang
  1 sibling, 1 reply; 15+ messages in thread
From: John Bradford @ 2003-10-20 17:49 UTC (permalink / raw)
  To: Richard B. Johnson, Rik van Riel
  Cc: Mudama, Eric, 'Nuno Silva', linux-kernel

> Battery-backed SRAM "drives" in the gigabyte sizes already exist.
> Terabytes should not be too far off.
> 
> Soon those "drives" will be as cheap as their mechanical emulations
> and you won't need those metal boxes with the rotating mass anymore.
> The batteries last about 10 years. Better than most mechanical
> drives.

You could make a solid state device really cheaply yourself - all you
need is a simple circuit that will allow you to connect 512 Mb of
EPROMs to the parallel port, and write a device driver to make them
appear as a block device.  If you want to boot from it, just find any
old network card with a boot PROM socket, write a bootloader which
could read a kernel image from the parallel port connected device,
write that bootloader to a PROM, and put it on the network card.
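A software-only way to play with the same idea -- making an arbitrary byte store answer block-device-style requests -- can be modeled in a few lines (a toy sketch with invented names; a real driver would register with the kernel's block layer, which this does not attempt):

```python
# Toy model of John's EPROM-array idea: wrap a plain file so it behaves
# like a fixed-geometry block device. Illustrative only.
class FileBackedBlockDev:
    BLK = 512

    def __init__(self, path, nblocks):
        self.path, self.nblocks = path, nblocks
        with open(path, "wb") as f:
            f.write(b"\0" * self.BLK * nblocks)   # zero-filled "media"

    def read_block(self, n):
        with open(self.path, "rb") as f:
            f.seek(n * self.BLK)
            return f.read(self.BLK)

    def write_block(self, n, data):
        assert len(data) == self.BLK              # whole blocks only
        with open(self.path, "r+b") as f:
            f.seek(n * self.BLK)
            f.write(data)

dev = FileBackedBlockDev("eprom.img", nblocks=8)
dev.write_block(3, b"\xab" * 512)
assert dev.read_block(3) == b"\xab" * 512
```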

John.


* RE: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-20 17:48       ` David Lang
@ 2003-10-20 18:29         ` John Bradford
  0 siblings, 0 replies; 15+ messages in thread
From: John Bradford @ 2003-10-20 18:29 UTC (permalink / raw)
  To: David Lang
  Cc: Richard B. Johnson, Rik van Riel, Mudama, Eric,
	'Nuno Silva', linux-kernel

Quote from David Lang <david.lang@digitalinsight.com>:
> rotating storage is hitting $1 per gig, memory is running ~$100/gig
> (substantially more for the highest density memory)
> 
> making a small solid state drive is easy, cheap and definitely has some
> uses, but making something that will replace stacks of 300G drives is
> neither cheap nor easy.

Maybe one day local non-volatile storage won't even matter.

For example, say you were setting up a (partial) mirror of kernel.org.

If you already had several machines in a datacentre, you could install
another one with no disks at all, just 4 GB of RAM, and configure it
to boot over the LAN, loading the root filesystem into a ramdisk.

Once booted, it could retrieve the parts of kernel.org that you wanted
to serve from a trusted mirror site, and begin serving.

Other such machines could use your machine as a trusted mirror site,
and eventually you could have lots of these machines all holding their
partial mirror of kernel.org in RAM.

As long as there is at least one on-line, any others can go down and
come up, and it doesn't really matter - they will just re-sync with
another node.
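The resync property described above -- any node can come back as long as at least one peer still holds the data -- can be sketched as a toy model (names and data are invented; a real deployment would presumably use rsync or similar over the LAN):

```python
# Toy sketch: diskless nodes hold the mirror only in RAM and, on boot,
# copy it from any peer that is already up. Illustrative only.
class Node:
    def __init__(self):
        self.data = None                 # in-RAM copy of the mirror

    def boot(self, peers):
        for p in peers:
            if p.data is not None:
                self.data = dict(p.data)  # resync from first live peer
                return True
        return False                      # nobody up: need the trusted seed

seed = Node()
seed.data = {"linux-2.6.0.tar.gz": b"..."}

a, b = Node(), Node()
assert a.boot([seed])        # first node syncs from the trusted seed
seed.data = None             # seed goes down
assert b.boot([a, seed])     # still recoverable: a holds the data in RAM
```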

Of course, this would use up a lot of network bandwidth, but in the
future that may not matter.

Or, a more practical usage would be a load balanced cluster of
webservers - why bother with non-volatile storage in all of them?
Some of them could serve entirely from RAM, having booted over the
LAN.

John.


* Re: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-18 17:18 Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?) Mudama, Eric
  2003-10-18 18:06 ` Matthias Urlichs
  2003-10-20 15:54 ` Rik van Riel
@ 2003-10-21 19:10 ` H. Peter Anvin
  2 siblings, 0 replies; 15+ messages in thread
From: H. Peter Anvin @ 2003-10-21 19:10 UTC (permalink / raw)
  To: linux-kernel

Followup to:  <785F348679A4D5119A0C009027DE33C105CDB2EF@mcoexc04.mlm.maxtor.com>
By author:    "Mudama, Eric" <eric_mudama@Maxtor.com>
In newsgroup: linux.dev.kernel
> 
> If current trends hold, in the next few years, hard drives are going to have
> to pick up and rewrite their data continuously to avoid signal decay on the
> media... a drive gets closer and closer to a DRAM cell than a stone tablet.
> (And yes, I've heard all the jokes about bricks/stones/etc)
> 

Quite frankly, I think you'll have a hideously hard time selling that
to customers, once a few of them have lost their Quicken records due
to having had their computers turned off/disconnected for some time.

	-hpa
-- 
<hpa@transmeta.com> at work, <hpa@zytor.com> in private!
If you send me mail in HTML format I will assume it's spam.
"Unix gives you enough rope to shoot yourself in the foot."
Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64


* Re: Blockbusting news, this is important (Re: Why are bad disk sectors numbered strangely, and what happens to them?)
  2003-10-20 16:24     ` Chris Friesen
@ 2003-10-21 19:13       ` H. Peter Anvin
  0 siblings, 0 replies; 15+ messages in thread
From: H. Peter Anvin @ 2003-10-21 19:13 UTC (permalink / raw)
  To: linux-kernel

Followup to:  <3F940C42.7080308@nortelnetworks.com>
By author:    Chris Friesen <cfriesen@nortelnetworks.com>
In newsgroup: linux.dev.kernel
>
> Richard B. Johnson wrote:
> 
> > Battery-backed SRAM "drives" in the gigabyte sizes already exist.
> > Terabytes should not be too far off.
> > 
> > Soon those "drives" will be as cheap as their mechanical emulations
> > and you won't need those metal boxes with the rotating mass anymore.
> > The batteries last about 10 years. Better than most mechanical
> > drives.
> 
> I'm dubious.  Ram costs about 150-200X as much as hard drives.  I don't 
> see that changing.
> 

Not without a completely disruptive technology change, which is always
possible; MRAM is one possibility.

Having nonvolatile storage with access times near current DRAM speeds
and cost/densities near current disk would change the computer
industry in a very fundamental way, not the least because current
operating systems make the memory/disk dichotomy very visible.

	-hpa
-- 
<hpa@transmeta.com> at work, <hpa@zytor.com> in private!
If you send me mail in HTML format I will assume it's spam.
"Unix gives you enough rope to shoot yourself in the foot."
Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64

