* xfs_force_shutdown after Raid crash
@ 2009-01-30 21:53 Steffen Knauf
2009-01-31 10:57 ` Christoph Hellwig
0 siblings, 1 reply; 28+ messages in thread
From: Steffen Knauf @ 2009-01-30 21:53 UTC (permalink / raw)
To: xfs
Hello,
after a RAID crash (RAID controller problem: 3 disks were kicked out of the
disk group), 2 of 3 partitions (XFS filesystems) were shut down immediately.
Perhaps somebody has an idea what the best solution is (xfs_repair?).
Here is a short excerpt from the log file:
------------------------------------------------------------------------------------------
kernel: Call Trace: <ffffffff882c4d21>{:xfs:xfs_free_ag_extent+1081}
kernel: <ffffffff882c632e>{:xfs:xfs_free_extent+186}
<ffffffff882ebf66>{:xfs:xfs_efd_init+54}
kernel: <ffffffff88303acd>{:xfs:xfs_trans_get_efd+36}
<ffffffff882d26a5>{:xfs:xfs_bmap_finish+240}
kernel: <ffffffff882f3bb9>{:xfs:xfs_itruncate_finish+365}
<ffffffff8830bb87>{:xfs:xfs_inactive+558}
kernel: <ffffffff88311605>{:xfs:validate_fields+39}
<ffffffff8831406e>{:xfs:linvfs_clear_inode+165}
kernel: <ffffffff8019194a>{clear_inode+210}
<ffffffff80191a62>{generic_delete_inode+231}
kernel: <ffffffff80189f64>{do_unlinkat+213}
<ffffffff8010a7be>{system_call+126}
kernel: xfs_force_shutdown(sdb2,0x8) called from line 4200 of file
fs/xfs/xfs_bmap.c. Return address = 0xffffffff882d26e7
kernel: Filesystem "sdb2": Corruption of in-memory data detected.
Shutting down filesystem: sdb2
kernel: Please umount the filesystem, and rectify the problem(s)
kernel: XFS internal error XFS_WANT_CORRUPTED_RETURN at line 298 of file
fs/xfs/xfs_alloc.c. Caller 0xffffffff882c5b35
munich kernel:
munich kernel: Call Trace:
<ffffffff882c42a8>{:xfs:xfs_alloc_fixup_trees+700}
munich kernel: <ffffffff882db2b7>{:xfs:xfs_btree_init_cursor+49}
<ffffffff882c5b35>{:xfs:xfs_alloc_ag_vextent+3015}
munich kernel: <ffffffff802d035d>{__down_read+18}
<ffffffff882c661b>{:xfs:xfs_alloc_vextent+719}
munich kernel: <ffffffff882d43e8>{:xfs:xfs_bmapi+5590}
<ffffffff882f8f94>{:xfs:xlog_write+1518}
munich kernel: <ffffffff88316edd>{:xfs:kmem_zone_zalloc+30}
<ffffffff8830dae5>{:xfs:xfs_iomap_write_allocate+521}
munich kernel: <ffffffff8015aaf2>{mempool_alloc+49}
<ffffffff8830cc1a>{:xfs:xfs_iomap+762}
munich kernel: <ffffffff8830e5ea>{:xfs:xfs_map_blocks+53}
<ffffffff80177086>{alternate_node_alloc+112}
munich kernel: <ffffffff8830e940>{:xfs:xfs_page_state_convert+695}
munich kernel: <ffffffff80177086>{alternate_node_alloc+112}
<ffffffff801e478d>{cfq_set_request+619}
munich kernel: <ffffffff801eedd3>{swiotlb_map_sg+55}
<ffffffff880b16a3>{:mptscsih:mptscsih_qcmd+1399}
munich kernel: <ffffffff8830f3fd>{:xfs:linvfs_writepage+167}
<ffffffff8019bb7b>{mpage_writepages+435}
munich kernel: <ffffffff8830f356>{:xfs:linvfs_writepage+0}
<ffffffff801da7c1>{generic_make_request+339}
munich kernel: <ffffffff801dc34c>{submit_bio+186}
<ffffffff8015db47>{do_writepages+41}
munich kernel: <ffffffff8019a4c2>{__writeback_single_inode+449}
<ffffffff801293ea>{default_wake_function+0}
munich kernel: <ffffffff88312420>{:xfs:xfs_bdstrat_cb+55}
<ffffffff882fa6f2>{:xfs:xfs_log_need_covered+82}
munich kernel: <ffffffff8019aa62>{sync_sb_inodes+469}
<ffffffff80143b81>{keventd_create_kthread+0}
munich kernel: <ffffffff8019af12>{writeback_inodes+130}
<ffffffff8015dcc6>{wb_kupdate+218}
munich kernel: <ffffffff802cf340>{thread_return+0}
<ffffffff8015e506>{pdflush+0}
munich kernel: <ffffffff8015e674>{pdflush+366}
<ffffffff8015dbec>{wb_kupdate+0}
munich kernel: <ffffffff80143e66>{kthread+236}
<ffffffff8010b84e>{child_rip+8}
munich kernel: <ffffffff80143b81>{keventd_create_kthread+0}
<ffffffff80143d7a>{kthread+0}
munich kernel: <ffffffff8010b846>{child_rip+0}
------------------------------------------------------------------------------------------
greets
Steffen
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: xfs_force_shutdown after Raid crash
2009-01-30 21:53 xfs_force_shutdown after Raid crash Steffen Knauf
@ 2009-01-31 10:57 ` Christoph Hellwig
2009-02-03 1:22 ` Michael Monnerie
2009-02-06 15:57 ` Steffen Knauf
0 siblings, 2 replies; 28+ messages in thread
From: Christoph Hellwig @ 2009-01-31 10:57 UTC (permalink / raw)
To: Steffen Knauf; +Cc: xfs
On Fri, Jan 30, 2009 at 10:53:19PM +0100, Steffen Knauf wrote:
> Hello,
>
> after a RAID crash (RAID controller problem: 3 disks were kicked out of the
> disk group), 2 of 3 partitions (XFS filesystems) were shut down immediately.
> Perhaps somebody has an idea what the best solution is (xfs_repair?).
This looks like you were running with a write-back cache enabled on the
controller / disks but without barriers. xfs_repair should be able
to repair the filesystem. If you're lucky, only the free-space btrees
are corrupted (as in the trace below), which xfs_repair can rebuild
from scratch.
* Re: xfs_force_shutdown after Raid crash
2009-01-31 10:57 ` Christoph Hellwig
@ 2009-02-03 1:22 ` Michael Monnerie
2009-02-03 3:13 ` Eric Sandeen
2009-02-06 15:57 ` Steffen Knauf
1 sibling, 1 reply; 28+ messages in thread
From: Michael Monnerie @ 2009-02-03 1:22 UTC (permalink / raw)
To: xfs
On Saturday, 31 January 2009, Christoph Hellwig wrote:
> This looks like you were running with a write back cache enabled on
> the controller / disks but without barriers.
I've read that this is dangerous. How can I tell whether I'm affected by
the same issue? I use an Areca 1680 SAS RAID controller with 2GB of cache,
so there could be a lot of writes in it.
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-03 1:22 ` Michael Monnerie
@ 2009-02-03 3:13 ` Eric Sandeen
2009-02-03 9:22 ` Michael Monnerie
0 siblings, 1 reply; 28+ messages in thread
From: Eric Sandeen @ 2009-02-03 3:13 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
Michael Monnerie wrote:
> On Saturday, 31 January 2009, Christoph Hellwig wrote:
>> This looks like you were running with a write back cache enabled on
>> the controller / disks but without barriers.
>
> I've read that this is dangerous. How can I tell if I suffer the same? I
> use an Areca 1680 SAS RAID Controller with 2GB Cache, so there could be
> a lot of writes in it.
>
> mfg zmi
You'd need to read the docs for your controller to find out how to tell
whether it has a write-back cache enabled, and whether it is battery-backed or not.
-Eric
* Re: xfs_force_shutdown after Raid crash
2009-02-03 3:13 ` Eric Sandeen
@ 2009-02-03 9:22 ` Michael Monnerie
2009-02-03 9:32 ` Christoph Hellwig
0 siblings, 1 reply; 28+ messages in thread
From: Michael Monnerie @ 2009-02-03 9:22 UTC (permalink / raw)
To: xfs
On Tuesday, 3 February 2009, Eric Sandeen wrote:
> you'd need to read the docs for your controller, to find out how to
> tell if it has a writeback cache enabled, and whether it is
> battery-backed or not.
Sorry, I didn't mention that. Yes, it's write-back (it can be switched off)
and it's battery-backed. Is there no danger then? In his mail, Christoph
wrote that the problems came from running "with a write back cache enabled
on the controller / disks but without barriers", and I thought the (not
supported/used) barriers could be a problem.
I've re-read the FAQ now. It says it's recommended to turn off barrier
writes if you have a battery-backed write-back cache, and I guess I'll do
that. So I misunderstood Christoph.
But what about the hard disk cache: should that be disabled? I think in
case of a power failure the disks just lose their cache contents, right? So
the battery-backed controller cache only protects its own contents, and the
disks will just throw away the up to 32MB of cache they have?
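For illustration (device and mount-point names are placeholders, not from the thread): on kernels of that era XFS enables barriers by default, and they can be switched off per mount:

```shell
# Disable barriers on an already-mounted XFS filesystem:
mount -o remount,nobarrier /data

# Or persistently via /etc/fstab:
# /dev/sdb2  /data  xfs  nobarrier  0  0

# If the underlying device cannot honour barriers, XFS logs a notice,
# so grepping the kernel log is a quick sanity check:
dmesg | grep -i barrier
```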
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-03 9:22 ` Michael Monnerie
@ 2009-02-03 9:32 ` Christoph Hellwig
2009-02-03 10:40 ` Michael Monnerie
0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2009-02-03 9:32 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
On Tue, Feb 03, 2009 at 10:22:38AM +0100, Michael Monnerie wrote:
> But what about the hard disk cache - should that be disabled? I think in
> case of a power failure, they just lose their cache contents, right? So
> the battery-backed controller cache only protects its own contents; the
> disks will just throw away up to the 32MB cache they have?
Yes. I would hope RAID controllers disable the write cache on the disks,
but for lower-end controllers I'm not sure they really do.
* Re: xfs_force_shutdown after Raid crash
2009-02-03 9:32 ` Christoph Hellwig
@ 2009-02-03 10:40 ` Michael Monnerie
2009-02-03 15:49 ` Christoph Hellwig
0 siblings, 1 reply; 28+ messages in thread
From: Michael Monnerie @ 2009-02-03 10:40 UTC (permalink / raw)
To: xfs
On Tuesday, 3 February 2009, Christoph Hellwig wrote:
> Yes. I would hope raid controllers disable the write cache on disks,
> but for lower end controllers I'm not sure they really do it.
On Areca controllers, I can select whether I want it on or off. Could
information about the disk cache be added to the FAQ? It would surely
save some people's data... :-)
So: battery-backed controller cache on, disk cache off, barriers off.
Quite simple once you know it :-))
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-03 10:40 ` Michael Monnerie
@ 2009-02-03 15:49 ` Christoph Hellwig
2009-02-04 8:52 ` Michael Monnerie
0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2009-02-03 15:49 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
On Tue, Feb 03, 2009 at 11:40:08AM +0100, Michael Monnerie wrote:
> On Tuesday, 3 February 2009, Christoph Hellwig wrote:
> > Yes. I would hope raid controllers disable the write cache on disks,
> > but for lower end controllers I'm not sure they really do it.
>
> On Areca Controllers, I can select if I want it on or off. Could
> information about the disk cache be added to the FAQ? Would for sure
> save some people's data... :-)
>
> So battery backed controller cache on, disk cache off, barriers off.
> Quite simple once you know it :-))
Yeah, that sounds correct. Do you volunteer for the FAQ entry? xfs.org
is a wiki so you could add it. I'm happy to proof-read it if you want.
* Re: xfs_force_shutdown after Raid crash
2009-02-03 15:49 ` Christoph Hellwig
@ 2009-02-04 8:52 ` Michael Monnerie
2009-02-04 10:27 ` Michael Monnerie
2009-02-04 12:22 ` Dave Chinner
0 siblings, 2 replies; 28+ messages in thread
From: Michael Monnerie @ 2009-02-04 8:52 UTC (permalink / raw)
To: xfs
On Tuesday, 3 February 2009, Christoph Hellwig wrote:
> Yeah, that sounds correct. Do you volunteer for the FAQ entry?
> xfs.org is a wiki so you could add it. I'm happy to proof-read it
> if you want.
I don't know if it's good and correct; I just put this in the wiki and
additionally changed 2 sections. Please check the wiki log to see whether
it's correct:
== Q. What about the hard disk write cache? ==
The problem with hard disk write caches is that their contents are lost
in case of a power outage. With hard disk cache sizes of currently up to
32MB, that can be a lot of valuable information.
With a single hard disk and barriers turned on (the default), a power
failure "only" loses the data in the cache but at least does not destroy
the filesystem.
With a RAID controller with a battery-backed cache, you should turn off
barriers, as recommended above. But then you *must* disable the hard
disk write cache in order to keep the filesystem intact after a power
failure.
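As a hedged illustration of that last point (not part of the FAQ text): for disks the OS can address directly, the drive write cache can be inspected and turned off with hdparm (ATA) or sdparm (SCSI/SAS); disks hidden behind a hardware RAID controller usually have to be configured through the controller's own tools instead. /dev/sda is a placeholder:

```shell
# ATA disks: show and disable the drive's write cache.
hdparm -W  /dev/sda   # query the current write-cache setting
hdparm -W0 /dev/sda   # disable write caching

# SCSI/SAS disks: the WCE bit in the caching mode page does the same.
sdparm --get=WCE /dev/sda
sdparm --clear=WCE --save /dev/sda
```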
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-04 8:52 ` Michael Monnerie
@ 2009-02-04 10:27 ` Michael Monnerie
2009-02-04 12:26 ` Dave Chinner
2009-02-04 12:22 ` Dave Chinner
1 sibling, 1 reply; 28+ messages in thread
From: Michael Monnerie @ 2009-02-04 10:27 UTC (permalink / raw)
To: xfs
On Wednesday, 4 February 2009, Michael Monnerie wrote:
> == Q. What about the hard disk write cache? ==
What just came to my mind: what about Xen/VMware?
What settings should be used within a virtual machine? Even if I have a
battery-backed cache and nobarrier on the host, the VM itself could
crash, or the whole host could freeze. Is "nobarrier" safe within a VM?
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-04 8:52 ` Michael Monnerie
2009-02-04 10:27 ` Michael Monnerie
@ 2009-02-04 12:22 ` Dave Chinner
2009-02-04 12:45 ` Emmanuel Florac
` (2 more replies)
1 sibling, 3 replies; 28+ messages in thread
From: Dave Chinner @ 2009-02-04 12:22 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
On Wed, Feb 04, 2009 at 09:52:45AM +0100, Michael Monnerie wrote:
> On Tuesday, 3 February 2009, Christoph Hellwig wrote:
> > Yeah, that sounds correct. Do you volunteer for the FAQ entry?
> > xfs.org is a wiki so you could add it. I'm happy to proof-read it
> > if you want.
>
> I don't know if it's good and correct, I just put this in the wiki, and
> additionally changed 2 sections, please check the wiki log if it's
> correct:
>
> == Q. What about the hard disk write cache? ==
>
> The problem with hard disk write caches is that their contents are lost
> in case of a power outage. With hard disk cache sizes of currently up to
> 32MB that can be a lot of valuable information.
>
> With a single hard disk and barriers turned on (on=default), a powerfail
> "only" loses data in the cache but at least does not destroy the
> filesystem.
I'd drop this paragraph - powerfail can destroy filesystems even on
a single disk (e.g. root directory gets corrupted).
> With a RAID controller with battery backed cache, you should turn off
> barriers, as recommended above. But then you *must* disable the hard
> disk write cache in order to ensure to keep the filesystem intact after
> a power failure.
I'd change this to say "*must* disable the individual hard disk
write caches" to make it clear that it is referencing the disks
behind the raid controller. I'd also say "The method for doing this
is different for each RAID controller. Please consult your RAID
controller documentation to determine how to change these settings."
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfs_force_shutdown after Raid crash
2009-02-04 10:27 ` Michael Monnerie
@ 2009-02-04 12:26 ` Dave Chinner
2009-02-04 15:03 ` Michael Monnerie
0 siblings, 1 reply; 28+ messages in thread
From: Dave Chinner @ 2009-02-04 12:26 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
On Wed, Feb 04, 2009 at 11:27:46AM +0100, Michael Monnerie wrote:
> On Wednesday, 4 February 2009, Michael Monnerie wrote:
> > == Q. What about the hard disk write cache? ==
>
> What just comes to my mind: what about XEN/VMware?
>
> What settings should be used within a virtual machine? Even if I have
> battery backed cache and nobarrier on the host, the VM itself could
> crash, or the whole host freeze. Is "nobarrier" safe within a VM?
Depends on the implementation of the hypervisor.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfs_force_shutdown after Raid crash
2009-02-04 12:22 ` Dave Chinner
@ 2009-02-04 12:45 ` Emmanuel Florac
2009-02-04 14:01 ` KELEMEN Peter
2009-02-04 15:24 ` Michael Monnerie
2009-02-04 15:33 ` Ralf Liebenow
2 siblings, 1 reply; 28+ messages in thread
From: Emmanuel Florac @ 2009-02-04 12:45 UTC (permalink / raw)
To: Dave Chinner; +Cc: Michael Monnerie, xfs
On Wed, 4 Feb 2009 23:22:41 +1100,
Dave Chinner <david@fromorbit.com> wrote:
> Please consult your RAID
> controller documentation to determine how to change these settings."
I have some controllers at hand, and I had a quick glance:
- Adaptec: allows setting the individual drives' cache:
arcconf setcache <disk> wb|wt
- 3ware: no information
- Xyratex: no information
- Areca: allows setting the individual cache for pass-through disks; needs
actual testing for drives that are part of an array.
--
----------------------------------------
Emmanuel Florac | Intellique
----------------------------------------
* Re: xfs_force_shutdown after Raid crash
2009-02-04 12:45 ` Emmanuel Florac
@ 2009-02-04 14:01 ` KELEMEN Peter
2009-02-04 15:15 ` Emmanuel Florac
0 siblings, 1 reply; 28+ messages in thread
From: KELEMEN Peter @ 2009-02-04 14:01 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: Michael Monnerie, xfs
* Emmanuel Florac (eflorac@intellique.com) [20090204 13:45]:
> - 3ware : no information
/cX/uX set cache=off
http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86
HTH,
Peter
--
.+'''+. .+'''+. .+'''+. .+'''+. .+''
Kelemen Péter / \ / \ Peter.Kelemen@cern.ch
.+' `+...+' `+...+' `+...+' `+...+'
* Re: xfs_force_shutdown after Raid crash
2009-02-04 12:26 ` Dave Chinner
@ 2009-02-04 15:03 ` Michael Monnerie
2009-02-13 10:12 ` Michael Monnerie
0 siblings, 1 reply; 28+ messages in thread
From: Michael Monnerie @ 2009-02-04 15:03 UTC (permalink / raw)
To: xfs
On Wednesday, 4 February 2009, Dave Chinner wrote:
> > What just comes to my mind: what about XEN/VMware?
> >
> > What settings should be used within a virtual machine? Even if I
> > have battery backed cache and nobarrier on the host, the VM itself
> > could crash, or the whole host freeze. Is "nobarrier" save within a
> > VM?
>
> Depends on the implementation of the hypervisor.
OK, so we don't know?
I guess VMware will be the most used for Linux systems, and Xen usage
will soon grow a lot as it's directly in the kernel now. Does anybody
know, for those two, whether "nobarrier" is safe/needed/a bad thing?
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-04 14:01 ` KELEMEN Peter
@ 2009-02-04 15:15 ` Emmanuel Florac
2009-02-04 15:25 ` Michael Monnerie
2009-02-04 15:41 ` KELEMEN Peter
0 siblings, 2 replies; 28+ messages in thread
From: Emmanuel Florac @ 2009-02-04 15:15 UTC (permalink / raw)
To: KELEMEN Peter; +Cc: Michael Monnerie, xfs
On Wed, 4 Feb 2009 15:01:13 +0100,
KELEMEN Peter <Peter.Kelemen@cern.ch> wrote:
> > - 3ware : no information
>
> /cX/uX set cache=off
Yes, but that sets the cache for the array globally; I can't find anything
about the individual disks' write caches specifically. Same thing for Xyratex.
BTW, I checked LSI MegaRAID and it allows setting the individual disks'
cache too:
MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL -EnDskCache|DisDskCache
So for now we have the following (individual disk cache settings control):
3ware:   no
Xyratex: no
Adaptec: yes
LSI:     yes
Areca:   possibly...
--
----------------------------------------
Emmanuel Florac | Intellique
----------------------------------------
* Re: xfs_force_shutdown after Raid crash
2009-02-04 12:22 ` Dave Chinner
2009-02-04 12:45 ` Emmanuel Florac
@ 2009-02-04 15:24 ` Michael Monnerie
2009-02-05 8:37 ` Dave Chinner
2009-02-04 15:33 ` Ralf Liebenow
2 siblings, 1 reply; 28+ messages in thread
From: Michael Monnerie @ 2009-02-04 15:24 UTC (permalink / raw)
To: xfs
(answering two mails in one here)
On Wednesday, 4 February 2009, Dave Chinner wrote:
> > With a single hard disk and barriers turned on (on=default), a
> > powerfail "only" loses data in the cache but at least does not
> > destroy the filesystem.
>
> I'd drop this paragraph - powerfail can destroy filesystems even on
> a single disk (e.g. root directory gets corrupted).
Isn't that what barriers are for? If I understand correctly, barriers
help against destroying the filesystem, except for the root dir? But that
should be "easily" fixable with xfs_repair or so?
I'd like to have a paragraph for normal XFS users: a PC with hard disks,
maybe with onboard RAID 1 or 10. So if I could keep the paragraph, that
should be OK (as I hope a destroyed root dir is a very, very rare case).
> > With a RAID controller with battery backed cache, you should turn
> > off barriers, as recommended above. But then you *must* disable the
> > hard disk write cache in order to ensure to keep the filesystem
> > intact after a power failure.
>
> I'd change this to say "*must* disable the individual hard disk
> write caches" to make it clear that it is referencing the disks
> behind the raid controller. I'd also say "The method for doing this
> is different for each RAID controller. Please consult your RAID
> controller documentation to determine how to change these settings."
That sounds good and I'll put it in.
On Wednesday, 4 February 2009, Emmanuel Florac wrote:
> I have some controllers at hand, and I had a quick glance :
> - Areca : Allows setting individual cache for passthru disks, needs
> actual testing for drives part of an array.
Areca allows "Disk Write Cache Mode" on/off under "System Controls" ->
"System Config" in the archttpd web interface, plus a per-volume write-back
cache on/off, but that's not relevant when using a battery (and those
who don't use one don't care about their data anyway).
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-04 15:15 ` Emmanuel Florac
@ 2009-02-04 15:25 ` Michael Monnerie
2009-02-04 15:41 ` KELEMEN Peter
1 sibling, 0 replies; 28+ messages in thread
From: Michael Monnerie @ 2009-02-04 15:25 UTC (permalink / raw)
To: xfs
On Wednesday, 4 February 2009, Emmanuel Florac wrote:
[This talk is about which controllers allow individual disk write cache
to be turned off]
> 3Ware : no
> Xyratex : no
> Adaptec : yes
> LSI : yes
> Areca : possibly...
correcting:
Areca: yes
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-04 12:22 ` Dave Chinner
2009-02-04 12:45 ` Emmanuel Florac
2009-02-04 15:24 ` Michael Monnerie
@ 2009-02-04 15:33 ` Ralf Liebenow
2009-02-04 16:18 ` Michael Monnerie
2 siblings, 1 reply; 28+ messages in thread
From: Ralf Liebenow @ 2009-02-04 15:33 UTC (permalink / raw)
To: xfs
Hello !
Maybe this is a stupid question: shouldn't battery-backed RAID controllers
always turn their disks' write caches off?
As I see it (in case of a power failure):
- The disks are connected to the main power, so if there is a power
failure they're offline at that moment, and their (write) cache contents
are gone at that instant too.
- If they are connected to a battery-backed RAID cache, I assume that
cache will be written out as soon as the system is online again
(if the battery lasts that long).
- If a RAID controller does not turn off the disks' write caches, the
controller cannot know whether previous writes have made it to the disk.
A good RAID controller would also use its cache to reorganize the disk
writes to minimize seek times, doing something like intelligent command
queuing. That also means the order of writes to a disk could have been
changed by the controller. That would ultimately break any filesystem that
does not explicitly fsync consistent checkpoints to disk, which would make
battery-backed RAID systems pretty useless... wouldn't it?
So... a battery-backed RAID controller should default to "no disk write
cache", shouldn't it? Otherwise, why would anyone want to use such
expensive controllers? It just does not make sense to have a battery-backed
cache on the controller when things get inconsistent at a power outage; it
wouldn't serve any purpose. I hope developers of battery-backed RAID
controllers are aware of that implication...
Greets
Ralf
> On Wed, Feb 04, 2009 at 09:52:45AM +0100, Michael Monnerie wrote:
> > On Tuesday, 3 February 2009, Christoph Hellwig wrote:
> > > Yeah, that sounds correct. Do you volunteer for the FAQ entry?
> > > xfs.org is a wiki so you could add it. I'm happy to proof-read it
> > > if you want.
> >
> > I don't know if it's good and correct, I just put this in the wiki, and
> > additionally changed 2 sections, please check the wiki log if it's
> > correct:
> >
> > == Q. What about the hard disk write cache? ==
> >
> > The problem with hard disk write caches is that their contents are lost
> > in case of a power outage. With hard disk cache sizes of currently up to
> > 32MB that can be a lot of valuable information.
> >
> > With a single hard disk and barriers turned on (on=default), a powerfail
> > "only" loses data in the cache but at least does not destroy the
> > filesystem.
>
> I'd drop this paragraph - powerfail can destroy filesystems even on
> a single disk (e.g. root directory gets corrupted).
>
> > With a RAID controller with battery backed cache, you should turn off
> > barriers, as recommended above. But then you *must* disable the hard
> > disk write cache in order to ensure to keep the filesystem intact after
> > a power failure.
>
> I'd change this to say "*must* disable the individual hard disk
> write caches" to make it clear that it is referencing the disks
> behind the raid controller. I'd also say "The method for doing this
> is different for each RAID controller. Please consult your RAID
> controller documentation to determine how to change these settings."
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
--
theCode AG
HRB 78053, Amtsgericht Charlottenbg
USt-IdNr.: DE204114808
Vorstand: Ralf Liebenow, Michael Oesterreich, Peter Witzel
Aufsichtsratsvorsitzender: Wolf von Jaduczynski
Oranienstr. 10-11, 10997 Berlin
fon +49 30 617 897-0 fax -10
ralf@theCo.de http://www.theCo.de
* Re: xfs_force_shutdown after Raid crash
2009-02-04 15:15 ` Emmanuel Florac
2009-02-04 15:25 ` Michael Monnerie
@ 2009-02-04 15:41 ` KELEMEN Peter
2009-02-04 16:01 ` Michael Monnerie
2009-02-04 16:23 ` Emmanuel Florac
1 sibling, 2 replies; 28+ messages in thread
From: KELEMEN Peter @ 2009-02-04 15:41 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: Michael Monnerie, xfs
* Emmanuel Florac (eflorac@intellique.com) [20090204 16:15]:
> Yes but it set cache for the array globally, I don't find
> anything about the individual disks write cache specifically.
> Same thing for Xyratex.
"Write cache includes the disk drive cache and controller cache."
I assume this means you can only set the drive caches and the unit
caches together.
Peter
--
.+'''+. .+'''+. .+'''+. .+'''+. .+''
Kelemen Péter / \ / \ Peter.Kelemen@cern.ch
.+' `+...+' `+...+' `+...+' `+...+'
* Re: xfs_force_shutdown after Raid crash
2009-02-04 15:41 ` KELEMEN Peter
@ 2009-02-04 16:01 ` Michael Monnerie
2009-02-04 16:23 ` Emmanuel Florac
1 sibling, 0 replies; 28+ messages in thread
From: Michael Monnerie @ 2009-02-04 16:01 UTC (permalink / raw)
To: xfs
On Wednesday, 04 February 2009, KELEMEN Peter wrote:
> I assume this means you can only set the drive caches and the unit
> caches together.
Should I write an overview, like the one we worked out here on the list,
into the wiki? It could serve as a quick guide, so I think it would be
useful.
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-04 15:33 ` Ralf Liebenow
@ 2009-02-04 16:18 ` Michael Monnerie
2009-02-05 8:22 ` Michael Monnerie
0 siblings, 1 reply; 28+ messages in thread
From: Michael Monnerie @ 2009-02-04 16:18 UTC (permalink / raw)
To: ralf, xfs
On Wednesday, 04 February 2009, Ralf Liebenow wrote:
> Shouldn't battery-backed RAID controllers always turn their disks'
> write caches off?
>
> As I see it (in case of a power failure):
> - the disks are connected to the main power, so if there is a power
> failure they go offline at that moment, and their (write) cache
> contents are gone at the same instant
Normally a server is on a UPS, which should report a power outage so
that the server has enough time to shut down gracefully. Still, there
can be other events, such as:
- power supply failure (even with redundant PSUs, an outage can happen)
- human error (coffee spilled into the server, someone unplugging the
cable between UPS and server, ...)
- and of course total mainboard/CPU/RAM crashes
so you are basically never safe.
> - if a RAID controller does not turn off the disks' write caches, the
> controller cannot know if previous writes have made it to the disk.
The controller could keep in-flight blocks in its cache, waiting for
confirmation from the disk that the blocks are on the media, and only
remove them from the cache afterwards. I don't know whether controllers
actually do that. I'll ask Areca support about it.
> A good RAID controller would also use its cache to reorganise disk
> writes to minimise seek times, doing something like intelligent
> command queuing. This also means that the order of writes to a disk
> could have been changed by the controller, which would ultimately
> break any filesystem that does not explicitly fsync consistent
> checkpoints to disk. That would make battery-backed RAID systems
> pretty useless... wouldn't it?
>
> So... a battery-backed RAID controller should default to "no disk
> write cache", shouldn't it? Otherwise why would anyone want such
> expensive controllers? It just does not make sense to have a
> battery-backed cache on the controller when things get inconsistent
> on a power outage... it wouldn't serve any purpose. I hope the
> developers of battery-backed RAID controllers are aware of that
> implication...
Yes; imagine a RAID with 8 hard disks, each with 32 MB of cache:
up to 256 MB of data lost, with a very good chance that filesystem
metadata was in the cache, since it is written very often...
I'll be back on this once I have an official answer from Areca.
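The worst-case figure above can be reproduced directly (the numbers are the ones from the paragraph, not measurements):

```shell
# Worst case on power loss: every drive's volatile cache is lost at once.
disks=8
cache_mb=32
lost_mb=$((disks * cache_mb))
echo "up to ${lost_mb} MB of acknowledged writes can vanish"
```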
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-04 15:41 ` KELEMEN Peter
2009-02-04 16:01 ` Michael Monnerie
@ 2009-02-04 16:23 ` Emmanuel Florac
1 sibling, 0 replies; 28+ messages in thread
From: Emmanuel Florac @ 2009-02-04 16:23 UTC (permalink / raw)
To: KELEMEN Peter; +Cc: Michael Monnerie, xfs
On Wed, 4 Feb 2009 16:41:53 +0100,
KELEMEN Peter <Peter.Kelemen@cern.ch> wrote:
> "Write cache includes the disk drive cache and controller cache."
>
> I assume this means you can only set the drive caches and the unit
> caches together.
Oh, you're right, I missed that. I think we'll plan some testing soon,
pulling power cables under heavy write load :)
--
----------------------------------------
Emmanuel Florac | Intellique
----------------------------------------
* Re: xfs_force_shutdown after Raid crash
2009-02-04 16:18 ` Michael Monnerie
@ 2009-02-05 8:22 ` Michael Monnerie
2009-02-05 12:05 ` Emmanuel Florac
0 siblings, 1 reply; 28+ messages in thread
From: Michael Monnerie @ 2009-02-05 8:22 UTC (permalink / raw)
To: xfs
On Wednesday, 04 February 2009, Michael Monnerie wrote:
> > - if a RAID controller does not turn off the disks' write caches,
> > the controller cannot know if previous writes have made it to the
> > disk.
>
> The controller could keep in-flight blocks in its cache, waiting
> for confirmation from the disk that the blocks are on the media, and
> only remove them from the cache afterwards. I don't know whether
> controllers actually do that. I'll ask Areca support about it.
I have an answer from Areca support:
*******************************************************
As soon as the hard drive firmware reports the command as completed,
the data is removed from the controller cache, so the controller will
not know whether the data has truly been written to the disks or still
remains only in the hard drive cache.
By the controller's default setting, if a battery module is connected,
it will automatically disable the hard drive cache for best data
protection. As you know, the controller can't protect data remaining
in the hard drive cache when a power outage occurs.
But this setting is configurable; some customers force-enable the hard
drive cache for better performance, because a hard drive with its
cache disabled performs quite poorly.
*******************************************************
So I'd say they have a very sensible default:
If you use a BBM (battery backup module), the disk write caches will be
off, because you care about your data.
If you don't use a BBM, they leave the disk write caches on, because
your data is not safe anyway, so why care? 8-) And as most magazines
test without a BBM, it boosts speed to the maximum, which is good for
benchmarks :-)
I'll put a section on RAID controllers into the wiki; if someone
objects, we can remove it again.
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
* Re: xfs_force_shutdown after Raid crash
2009-02-04 15:24 ` Michael Monnerie
@ 2009-02-05 8:37 ` Dave Chinner
0 siblings, 0 replies; 28+ messages in thread
From: Dave Chinner @ 2009-02-05 8:37 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
On Wed, Feb 04, 2009 at 04:24:27PM +0100, Michael Monnerie wrote:
> (compressing 2 answers here)
>
> On Wednesday, 04 February 2009, Dave Chinner wrote:
> > > With a single hard disk and barriers turned on (on = default), a
> > > power failure "only" loses data in the cache but at least does
> > > not destroy the filesystem.
> >
> > I'd drop this paragraph - powerfail can destroy filesystems even on
> > a single disk (e.g. root directory gets corrupted).
>
> Isn't that what barriers are for? If I understand correctly, barriers
> help against destroying the filesystem, except the root dir? But that
> should be "easily" fixable with xfs_repair or so?
See, I didn't understand what you were trying to say. ;)
What I missed was the "barriers turned on" part; I was referring
(context not quoted) to the fact that RAID5 is not unique in its
ability to trash the filesystem on power failure. You are right:
barriers on a single disk should prevent filesystem corruption and
will prevent loss of synchronously written data. Only asynchronously
written data will be lost (just like all the stuff sitting in RAM).
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfs_force_shutdown after Raid crash
2009-02-05 8:22 ` Michael Monnerie
@ 2009-02-05 12:05 ` Emmanuel Florac
0 siblings, 0 replies; 28+ messages in thread
From: Emmanuel Florac @ 2009-02-05 12:05 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
On Thu, 5 Feb 2009 09:22:09 +0100,
Michael Monnerie <michael.monnerie@is.it-management.at> wrote:
> I have an answer from Areca support:
Excellent. I'll ask 3Ware, Adaptec and Xyratex on this particular
point.
--
----------------------------------------
Emmanuel Florac | Intellique
----------------------------------------
* Re: xfs_force_shutdown after Raid crash
2009-01-31 10:57 ` Christoph Hellwig
2009-02-03 1:22 ` Michael Monnerie
@ 2009-02-06 15:57 ` Steffen Knauf
1 sibling, 0 replies; 28+ messages in thread
From: Steffen Knauf @ 2009-02-06 15:57 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: xfs
Hello,
sorry for the delay. I don't know whether it is interesting, but after
an xfs_repair the filesystem could be rebuilt completely.
Thanks, Christoph. I'm a little confused about the "write back cache"
and the "barrier" option.
On the RAID controller, "Write Cache" is enabled, "Write Cache Periodic
Flush = 5 seconds" and "Write Cache Flush Ratio = 45 Percent".
My kernel version is 2.6.16 (SLES10), so the default should be
nobarrier. But I read in the official SGI XFS training documentation
that write barriers are enabled by default on SLES10.
How can I check whether barriers are on or off? I can't find anything
in the log.
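One way to check (a sketch, assuming a 2.6.16-era setup; the exact log strings vary by kernel version) is to look at the mount options in `/proc/mounts` and at the kernel log. The mount line used below is hypothetical:

```shell
# On a real system:
#   grep xfs /proc/mounts     # 'nobarrier' shows up if set explicitly
#   dmesg | grep -i barrier   # XFS logs a message when it disables barriers
# Demonstration on a sample /proc/mounts line:
sample='/dev/sdb2 /data xfs rw,nobarrier 0 0'
case "$sample" in
  *nobarrier*) verdict='barriers disabled' ;;
  *)           verdict='barriers enabled (default when supported)' ;;
esac
echo "$verdict"
```

Note that the absence of `nobarrier` in the mount options only tells you what was requested; the dmesg check reveals whether the device actually accepted barrier writes.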
greets
Steffen
> On Fri, Jan 30, 2009 at 10:53:19PM +0100, Steffen Knauf wrote:
>
>> Hello,
>>
>> after a RAID crash (RAID controller problem: 3 disks were kicked
>> out of the disk group), 2 of 3 partitions (XFS filesystems) were
>> shut down immediately.
>> Perhaps somebody has an idea what the best solution is (xfs_repair?).
>>
>
> This looks like you were running with a write-back cache enabled on
> the controller/disks but without barriers. xfs_repair should be able
> to repair the filesystem. If you're lucky, only the freespace btrees
> are corrupted (as in the trace below), and xfs_repair can rebuild
> them from scratch.
>
>
>
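The repair Christoph suggests is usually run in two passes; a hedged sketch (the device path is the one from the shutdown report, and since the real commands need root and an unmounted filesystem, this block only prints the intended sequence):

```shell
# Typical xfs_repair sequence; not executed here, only echoed,
# because it needs the real block device and root privileges.
dev=/dev/sdb2   # device from the xfs_force_shutdown report
echo "umount $dev"
echo "xfs_repair -n $dev   # dry run: report problems, change nothing"
echo "xfs_repair $dev      # actual repair"
```

Running the `-n` (no-modify) pass first shows the extent of the damage before anything is rewritten.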
* Re: xfs_force_shutdown after Raid crash
2009-02-04 15:03 ` Michael Monnerie
@ 2009-02-13 10:12 ` Michael Monnerie
0 siblings, 0 replies; 28+ messages in thread
From: Michael Monnerie @ 2009-02-13 10:12 UTC (permalink / raw)
To: xfs
On Wednesday, 04 February 2009, Michael Monnerie wrote:
> On Wednesday, 04 February 2009, Dave Chinner wrote:
> > > What just comes to my mind: what about XEN/VMware?
> > >
> > > What settings should be used within a virtual machine? Even if I
> > > have battery-backed cache and nobarrier on the host, the VM
> > > itself could crash, or the whole host could freeze. Is
> > > "nobarrier" safe within a VM?
> >
> > Depends on the implementation of the hypervisor.
>
> OK, so we don't know?
> I guess VMware will be the most used for Linux systems, and XEN usage
> will soon grow a lot as it's directly in the kernel now. Does anybody
> know, for those two, whether "nobarrier" is safe/needed/a bad thing?
Does anybody know about XEN/VMware? It would be interesting, maybe worth
a FAQ entry if we have a good answer.
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
Thread overview: 28+ messages
2009-01-30 21:53 xfs_force_shutdown after Raid crash Steffen Knauf
2009-01-31 10:57 ` Christoph Hellwig
2009-02-03 1:22 ` Michael Monnerie
2009-02-03 3:13 ` Eric Sandeen
2009-02-03 9:22 ` Michael Monnerie
2009-02-03 9:32 ` Christoph Hellwig
2009-02-03 10:40 ` Michael Monnerie
2009-02-03 15:49 ` Christoph Hellwig
2009-02-04 8:52 ` Michael Monnerie
2009-02-04 10:27 ` Michael Monnerie
2009-02-04 12:26 ` Dave Chinner
2009-02-04 15:03 ` Michael Monnerie
2009-02-13 10:12 ` Michael Monnerie
2009-02-04 12:22 ` Dave Chinner
2009-02-04 12:45 ` Emmanuel Florac
2009-02-04 14:01 ` KELEMEN Peter
2009-02-04 15:15 ` Emmanuel Florac
2009-02-04 15:25 ` Michael Monnerie
2009-02-04 15:41 ` KELEMEN Peter
2009-02-04 16:01 ` Michael Monnerie
2009-02-04 16:23 ` Emmanuel Florac
2009-02-04 15:24 ` Michael Monnerie
2009-02-05 8:37 ` Dave Chinner
2009-02-04 15:33 ` Ralf Liebenow
2009-02-04 16:18 ` Michael Monnerie
2009-02-05 8:22 ` Michael Monnerie
2009-02-05 12:05 ` Emmanuel Florac
2009-02-06 15:57 ` Steffen Knauf