* [linux-lvm] Incorrect metadata area header checksum
@ 2003-06-02 9:34 jeff
2003-06-02 10:23 ` Alasdair G Kergon
0 siblings, 1 reply; 14+ messages in thread
From: jeff @ 2003-06-02 9:34 UTC (permalink / raw)
To: linux-lvm
Hi -
What does this error mean, and how do I rectify it? I have existing LVM
volumes on this box, but creating this big one (~350 GB) on my new
software RAID-5 array is giving me grief. I can mke2fs /dev/md2 just
fine, but when I try to set it up under LVM2:
root@cerulean:~# pvscan
PV /dev/md1 VG raid1 lvm1 [15.65 GB / 0 free]
Total: 1 [0 ] / in use: 1 [0 ] / in no VG: 0 [0 ]
root@cerulean:~# pvcreate /dev/md2
Failed to read label on physical volume /dev/md2
Physical volume "/dev/md2" successfully created
root@cerulean:~# vgcreate raid5 /dev/md2
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Volume group "raid5" successfully created
root@cerulean:~# vgdisplay raid5
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Volume group "raid5" doesn't exist
more info:
root@cerulean:~# uname -a
Linux cerulean 2.4.21-rc2-ac2 #1 Sun May 18 12:08:13 PDT 2003 i686 GNU/Linux
root@cerulean:~# egrep 'DEV_DM|DEV_LVM' /boot/config-2.4.21-rc2-ac2
# CONFIG_BLK_DEV_LVM is not set
CONFIG_BLK_DEV_DM=y
root@cerulean:~# dpkg -s lvm2 | grep Version
Version: 1.95.15-1
root@cerulean:~# dpkg -s raidtools2 | grep Version
Version: 1.00.3-2
root@cerulean:~# cat /proc/mdstat
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 sdb1[1] sda1[0]
976768 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
16418752 blocks [2/2] [UU]
md2 : active raid5 sdd1[3] sdc1[2] hdc1[1] hda1[0]
351654528 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
thanks,
jeff
* Re: [linux-lvm] Incorrect metadata area header checksum
2003-06-02 9:34 jeff
@ 2003-06-02 10:23 ` Alasdair G Kergon
0 siblings, 0 replies; 14+ messages in thread
From: Alasdair G Kergon @ 2003-06-02 10:23 UTC (permalink / raw)
To: linux-lvm
On Mon, Jun 02, 2003 at 07:35:00AM -0700, jeff wrote:
> volumes on this box, but creating this big one (~350 GB) on my new
> software RAID-5 array is giving me grief. I can mke2fs /dev/md2 just
> fine, but when I try to set it up under LVM2:
Two things to try:
1) Set up filters in lvm.conf to make sure LVM only looks at /dev/md and
not the underlying devices directly.
2) Use a bigger Physical Extent size with vgcreate.
If that still doesn't work, run with the verbose flags (-vvv) or set the
debug log-to-file options in lvm.conf to get more diagnostic information.
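For example (a sketch only -- the 32M extent size is just an illustration, and
you may need to clear the half-created VG first):
  # in the devices { } section of /etc/lvm/lvm.conf:
  filter = [ "a|^/dev/md[0-9]+$|", "r|.*|" ]
  # then retry with a larger physical extent size and full verbosity:
  vgcreate -s 32M -vvv raid5 /dev/md2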
Alasdair
--
agk@uk.sistina.com
* [linux-lvm] incorrect metadata area header checksum
@ 2005-05-12 0:44 Geoff
0 siblings, 0 replies; 14+ messages in thread
From: Geoff @ 2005-05-12 0:44 UTC (permalink / raw)
To: linux-lvm
Hi,
OK, here goes the story, hope someone can help (disclaimer: I am relatively
new to Linux and VERY new to LVM).
Was using Linux kernel 2.4.30 with LVM2 - initialised the PV, created the VG no
problem. Couldn't create the LV due to device mapper. Turns out that for some
reason it didn't compile into the kernel correctly. Tried recompiling a few
times, still no go.
Thought, blow it, try 2.6, so I compiled and booted off 2.6.11.8.
Everything seemed OK, except I couldn't create the LV due to a read/write
error. Searched the archives, no luck.
Decided to remove the VG and recreate it.
Removed the VG and got read/write errors.
Now I can't do anything; I keep getting "incorrect metadata area header checksum"
messages.
pvscan says:
"incorrect metadata area header checksum"
"incorrect metadata area header checksum"
"incorrect metadata area header checksum"
"incorrect metadata area header checksum"
PV /dev/sda7 lvm2 [198.35 GB]
Total: 1 [198.35 GB] / in use: 0 [0 ] / in no VG: 1 [198.35 GB]
So, I have 2 problems:
1. resolving this error and initialising the PV and creating the VG again,
2. Creating the LV.
help! :P
thanks
Geoff
Webmaster
Maxnet Systems Administration
PH 0508 MAXNET
Mobile: 021 919 220
Fax +64-9-3007227
www.maxnet.co.nz
* [linux-lvm] Incorrect metadata area header checksum
@ 2005-10-18 12:24 Eric S. Johansson
0 siblings, 0 replies; 14+ messages in thread
From: Eric S. Johansson @ 2005-10-18 12:24 UTC (permalink / raw)
To: LVM general discussion and development
I could really use some advice here. Is this something to worry about, or
something I can fix?
pvdisplay
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Incorrect metadata area header checksum
--- Physical volume ---
PV Name /dev/md0
VG Name raid_vg
PV Size 232.88 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 59618
Free PE 1960
Allocated PE 57658
PV UUID 5o1utA-eN0O-yJqx-4N0Q-NrjT-EpRn-dxyAyT
--- NEW Physical volume ---
PV Name /dev/sda1
VG Name
PV Size 232.88 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID Q2UgM6-QG6D-Lrdi-KnAT-Uw7w-6j75-kAO0l3
Incorrect metadata area header checksum
* [linux-lvm] Incorrect metadata area header checksum
@ 2006-03-11 2:43 Bernard Fay
0 siblings, 0 replies; 14+ messages in thread
From: Bernard Fay @ 2006-03-11 2:43 UTC (permalink / raw)
To: LVM general discussion and development
For any LVM command I run, I receive the following message:
Incorrect metadata area header checksum
and/or
Volume group mapper doesn't exist
Does someone know why, and how it could be fixed? I run Debian etch with
kernel 2.6.12-1-386 and lvm2 version 2.01.04-5.
TIA,
Bernard
* [linux-lvm] Incorrect metadata area header checksum
@ 2006-11-02 13:38 C'est Pierre
2006-11-02 22:02 ` Luca Berra
0 siblings, 1 reply; 14+ messages in thread
From: C'est Pierre @ 2006-11-02 13:38 UTC (permalink / raw)
To: LVM general discussion and development
Hello,
I am running into this error on CentOS 4.3; has anyone been through
this situation?
I added a disk, initialized it with pvcreate, and extended our VG with
vgextend. However, LVM constantly shows these messages:
# vgdisplay
Incorrect metadata area header checksum
/var/lock/lvm/V_VG00: open failed: No space left on device
Can't lock VG00: skipping
which doesn't seem to be true at all:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VG00-LogVol00
9.0G 7.0G 1.6G 83% /
/dev/sda1 99M 9.0M 85M 10% /boot
/dev/mapper/VG00-LogVol01
14G 510M 13G 4% /opt/lampp/var
/dev/mapper/VG00-LogVol02
977M 18M 910M 2% /tmp
None of the disks is full, yet I can't even write to /root:
# >foobar
-bash: foobar: No space left on device
Any idea?
Thanks
Pierre
* Re: [linux-lvm] Incorrect metadata area header checksum
2006-11-02 13:38 C'est Pierre
@ 2006-11-02 22:02 ` Luca Berra
0 siblings, 0 replies; 14+ messages in thread
From: Luca Berra @ 2006-11-02 22:02 UTC (permalink / raw)
To: linux-lvm
On Thu, Nov 02, 2006 at 01:38:22PM +0000, C'est Pierre wrote:
>None of the disks is full, yet I can't even write to /root:
>
># >foobar
>-bash: foobar: No space left on device
>
Maybe you are out of inodes?
Try checking with df -i.
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* [linux-lvm] Incorrect metadata area header checksum
@ 2008-02-03 2:18 Eckhard Kosin
2008-02-03 4:15 ` David Robinson
0 siblings, 1 reply; 14+ messages in thread
From: Eckhard Kosin @ 2008-02-03 2:18 UTC (permalink / raw)
To: linux-lvm
Hi all,
I just reordered my disk space: shrunk /dev/hda6, created a new
partition /dev/hda10 with this new space and added /dev/hda10 to the
existing volume group vg_uhu00. After that I could boot and all seemed
to be fine, but running vgscan I get:
root@uhu:/home/ecki/backup/uhu# vgscan
Reading all physical volumes. This may take a while...
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Couldn't find device with uuid 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'.
Couldn't find all physical volumes for volume group vg_uhu00.
Incorrect metadata area header checksum
Couldn't find device with uuid 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'.
Couldn't find all physical volumes for volume group vg_uhu00.
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Couldn't find device with uuid 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'.
Couldn't find all physical volumes for volume group vg_uhu00.
Incorrect metadata area header checksum
Couldn't find device with uuid 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'.
Couldn't find all physical volumes for volume group vg_uhu00.
Volume group "vg_uhu00" not found
Some additional information:
I'm running Ubuntu 6.06 (dapper)
root@uhu:/home/ecki/backup/uhu# uname -a
Linux uhu 2.6.15-51-386 #1 PREEMPT Thu Dec 6 20:20:49 UTC 2007 i686 GNU/Linux
root@uhu:/home/ecki/backup/uhu# apt-show-versions lvm2
lvm2/dapper uptodate 2.02.02-1ubuntu1.5
root@uhu:/home/ecki/backup/uhu# fdisk -ul
Disk /dev/hda: 40.0 GB, 40007761920 bytes
255 heads, 63 sectors/track, 4864 cylinders, total 78140160 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 63 14329979 7164958+ 7 HPFS/NTFS
/dev/hda2 14329980 78140159 31905090 f W95 Ext'd (LBA)
/dev/hda5 23438898 25993169 1277136 82 Linux swap / Solaris
/dev/hda6 25993233 38443544 6225156 83 Linux
/dev/hda7 42556248 42684704 64228+ 83 Linux
/dev/hda8 42684768 78140159 17727696 8e Linux LVM
/dev/hda9 14330106 23438834 4554364+ 83 Linux
/dev/hda10 38443608 42556184 2056288+ 83 Linux
Partition table entries are not in disk order
The volume group vg_uhu00 uses the partitions /dev/hda8, /dev/hda9 and
/dev/hda10 and, I believe, but I'm not sure, /dev/hda5 (swap
partition). I can't check, because lvdisplay gives the same output as vgscan above.
I have a line
filter = [ "r|/dev/cdrom|", "r|/dev/hda[1,6,7]|" ]
in my /etc/lvm/lvm.conf
Any suggestions on how to get a clean volume group?
Thanks a lot
Ecki
--
Eckhard Kosin
Kaspar-Kerll-Str. 41
D-81245 München, Germany
Tel.: (+49)(+89) 88 88 479
Tel., Fax: (+49)(+89) 835 844
mailto:Eckhard.Kosin@online.de
* Re: [linux-lvm] Incorrect metadata area header checksum
2008-02-03 2:18 Eckhard Kosin
@ 2008-02-03 4:15 ` David Robinson
2008-02-03 12:21 ` Alasdair G Kergon
0 siblings, 1 reply; 14+ messages in thread
From: David Robinson @ 2008-02-03 4:15 UTC (permalink / raw)
To: LVM general discussion and development
Eckhard Kosin wrote:
> Hi all,
>
> I just reordered my disk space: shrunk /dev/hda6, created a new
> partition /dev/hda10 with this new space and added /dev/hda10 to the
> existing volume group vg_uhu00. After that I could boot and all seemed
> to be fine, but running vgscan I get:
>
> root@uhu:/home/ecki/backup/uhu# vgscan
> Reading all physical volumes. This may take a while...
> Incorrect metadata area header checksum
> Incorrect metadata area header checksum
> Couldn't find device with uuid 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'.
> Couldn't find all physical volumes for volume group vg_uhu00.
> Incorrect metadata area header checksum
> Couldn't find device with uuid 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'.
> Couldn't find all physical volumes for volume group vg_uhu00.
> Incorrect metadata area header checksum
> Incorrect metadata area header checksum
> Couldn't find device with uuid 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'.
> Couldn't find all physical volumes for volume group vg_uhu00.
> Incorrect metadata area header checksum
> Couldn't find device with uuid 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'.
> Couldn't find all physical volumes for volume group vg_uhu00.
> Volume group "vg_uhu00" not found
>
> Some additional information:
>
> I'm running Ubuntu 6.06 (dapper)
>
> root@uhu:/home/ecki/backup/uhu# uname -a
> Linux uhu 2.6.15-51-386 #1 PREEMPT Thu Dec 6 20:20:49 UTC 2007 i686 GNU/Linux
>
> root@uhu:/home/ecki/backup/uhu# apt-show-versions lvm2
> lvm2/dapper uptodate 2.02.02-1ubuntu1.5
>
> root@uhu:/home/ecki/backup/uhu# fdisk -ul
>
> Disk /dev/hda: 40.0 GB, 40007761920 bytes
> 255 heads, 63 sectors/track, 4864 cylinders, total 78140160 sectors
> Units = sectors of 1 * 512 = 512 bytes
>
> Device Boot Start End Blocks Id System
> /dev/hda1 * 63 14329979 7164958+ 7 HPFS/NTFS
> /dev/hda2 14329980 78140159 31905090 f W95 Ext'd (LBA)
> /dev/hda5 23438898 25993169 1277136 82 Linux swap / Solaris
> /dev/hda6 25993233 38443544 6225156 83 Linux
> /dev/hda7 42556248 42684704 64228+ 83 Linux
> /dev/hda8 42684768 78140159 17727696 8e Linux LVM
> /dev/hda9 14330106 23438834 4554364+ 83 Linux
> /dev/hda10 38443608 42556184 2056288+ 83 Linux
>
> Partition table entries are not in disk order
>
> The volume group vg_uhu00 uses the partitions /dev/hda8, /dev/hda9 and
> /dev/hda10 and, I believe, but I'm not sure, /dev/hda5 (swap
> partition). I can't check, because lvdisplay gives the same output as vgscan above.
>
> I have a line
>
> filter = [ "r|/dev/cdrom|", "r|/dev/hda[1,6,7]|" ]
Is there any particular reason you're modifying the filter setting?
Generally you shouldn't need to modify it, except for a few special
situations.
Can you attach /etc/lvm/archive/vg_uhu00 ? The file will contain an
entry showing which device "Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR" is
likely to correspond to. Is it rejected by your filter setting?
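(A quick way to see which device it maps to, assuming the archive files use
the usual vg_uhu00_*.vg naming -- the glob below is illustrative:)
  grep -A2 'Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR' /etc/lvm/archive/vg_uhu00_*.vg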
If, when using the default filter setting, you still receive the
"Couldn't find device with uuid Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR'"
messages, then you've made an error when resizing the partitions.
--Dave
* Re: [linux-lvm] Incorrect metadata area header checksum
2008-02-03 4:15 ` David Robinson
@ 2008-02-03 12:21 ` Alasdair G Kergon
2008-02-03 18:14 ` Eckhard Kosin
0 siblings, 1 reply; 14+ messages in thread
From: Alasdair G Kergon @ 2008-02-03 12:21 UTC (permalink / raw)
To: LVM general discussion and development
On Sun, Feb 03, 2008 at 02:15:02PM +1000, David Robinson wrote:
> Eckhard Kosin wrote:
> >lvm2/dapper uptodate 2.02.02-1ubuntu1.5
> > filter = [ "r|/dev/cdrom|", "r|/dev/hda[1,6,7]|" ]
> Is there any particular reason you're modifying the filter setting?
> Generally you shouldn't need to modify it, except for a few special
> situations.
Indeed, that line is of course wrong if you now *intend* to use hda10!
('$' missing.)
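In other words, something like this (without the trailing anchor, the 'hda1'
pattern also rejects hda10 by prefix match):
  filter = [ "r|/dev/cdrom|", "r|^/dev/hda[1,6,7]$|" ]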
(And upgrade 2.02.02 to something more recent.)
Alasdair
--
agk@redhat.com
* Re: [linux-lvm] Incorrect metadata area header checksum
2008-02-03 12:21 ` Alasdair G Kergon
@ 2008-02-03 18:14 ` Eckhard Kosin
2008-02-04 3:06 ` David Brown
0 siblings, 1 reply; 14+ messages in thread
From: Eckhard Kosin @ 2008-02-03 18:14 UTC (permalink / raw)
To: LVM general discussion and development
Hi Dave, hi Alasdair,
thank you for your quick answer.
Am Sonntag, den 03.02.2008, 12:21 +0000 schrieb Alasdair G Kergon:
> On Sun, Feb 03, 2008 at 02:15:02PM +1000, David Robinson wrote:
> > Eckhard Kosin wrote:
> > >lvm2/dapper uptodate 2.02.02-1ubuntu1.5
>
> > > filter = [ "r|/dev/cdrom|", "r|/dev/hda[1,6,7]|" ]
>
> > Is there any particular reason you're modifying the filter setting?
> > Generally you shouldn't need to modify it, except for a few special
> > situations.
I exclude hda1 (NTFS), hda6 (old Linux) and hda7 (/boot), because they
are not under LVM. Without exclusion by a filter rule, scanning these
devices produces the "Incorrect metadata area header checksum" message.
>
> Indeed, that line is of course wrong if you now *intend* to use hda10!
> ('$' missing.)
Indeed, I should have read the documentation included in lvm.conf,
sorry.
I just changed the filter rule to
filter = [ "r|/dev/cdrom|", "r|^/dev/hda[1,6,7]$|" ]
Now, hda10 will be found, but I get the "Incorrect metadata area header
checksum" complaint from the scan of hda7. An excerpt from the
output of "vgdisplay -vvv":
...
Opened /dev/hda7 RO O_DIRECT
/dev/hda7: block size is 1024 bytes
/dev/hda7: lvm2 label detected
Closed /dev/hda7
lvmcache: /dev/hda7 now orphaned
Opened /dev/hda7 RO O_DIRECT
/dev/hda7: block size is 1024 bytes
Incorrect metadata area header checksum
Closed /dev/hda7
Opened /dev/vg_uhu00/var_cache_rsnapshot RO O_DIRECT
...
Opened /dev/evms/hda7 RO O_DIRECT
/dev/evms/hda7: block size is 512 bytes
/dev/evms/hda7: lvm2 label detected
Closed /dev/evms/hda7
Duplicate PV EWhly9hF5TcIV1KyHfXVUQDBe7S1uf14 on /dev/hda7 - using
dm /dev/evms/hda7
Opened /dev/evms/hda7 RO O_DIRECT
/dev/evms/hda7: block size is 512 bytes
Incorrect metadata area header checksum
Closed /dev/evms/hda7
...
I append the complete output of "vgdisplay -vvv".
Why is hda7 scanned in spite of the filter rule? And what about
/dev/evms/hda7? Do I have to exclude it with its own filter rule?
Thank you
Ecki
>
> (And upgrade 2.02.02 to something more recent.)
>
> Alasdair
--
Eckhard Kosin
Kaspar-Kerll-Str. 41
D-81245 München, Germany
Tel.: (+49)(+89) 88 88 479
Tel., Fax: (+49)(+89) 835 844
mailto:Eckhard.Kosin@online.de
[-- Attachment #2: vgdisplay.out --]
[-- Type: text/plain, Size: 18241 bytes --]
Processing: vgdisplay -vvv
O_DIRECT will be used
Setting global/locking_type to 1
Setting global/locking_dir to /var/lock/lvm
File-based locking enabled.
Finding all volume groups
/dev/ram0: Not using O_DIRECT
Opened /dev/ram0 RO
/dev/ram0: block size is 1024 bytes
/dev/ram0: No label detected
Closed /dev/ram0
Opened /dev/dm RO O_DIRECT
/dev/dm: block size is 1024 bytes
/dev/dm: No label detected
Closed /dev/dm
/dev/ram1: Not using O_DIRECT
Opened /dev/ram1 RO
/dev/ram1: block size is 1024 bytes
/dev/ram1: No label detected
Closed /dev/ram1
Opened /dev/hda1 RO O_DIRECT
/dev/hda1: block size is 512 bytes
/dev/hda1: No label detected
Closed /dev/hda1
Opened /dev/vg_uhu00/home RO O_DIRECT
/dev/vg_uhu00/home: block size is 4096 bytes
/dev/vg_uhu00/home: No label detected
Closed /dev/vg_uhu00/home
/dev/ram2: Not using O_DIRECT
Opened /dev/ram2 RO
/dev/ram2: block size is 1024 bytes
/dev/ram2: No label detected
Closed /dev/ram2
Opened /dev/vg_uhu00/usr RO O_DIRECT
/dev/vg_uhu00/usr: block size is 1024 bytes
/dev/vg_uhu00/usr: No label detected
Closed /dev/vg_uhu00/usr
/dev/ram3: Not using O_DIRECT
Opened /dev/ram3 RO
/dev/ram3: block size is 1024 bytes
/dev/ram3: No label detected
Closed /dev/ram3
Opened /dev/vg_uhu00/opt RO O_DIRECT
/dev/vg_uhu00/opt: block size is 1024 bytes
/dev/vg_uhu00/opt: No label detected
Closed /dev/vg_uhu00/opt
/dev/ram4: Not using O_DIRECT
Opened /dev/ram4 RO
/dev/ram4: block size is 1024 bytes
/dev/ram4: No label detected
Closed /dev/ram4
Opened /dev/vg_uhu00/tmp RO O_DIRECT
/dev/vg_uhu00/tmp: block size is 1024 bytes
/dev/vg_uhu00/tmp: No label detected
Closed /dev/vg_uhu00/tmp
/dev/ram5: Not using O_DIRECT
Opened /dev/ram5 RO
/dev/ram5: block size is 1024 bytes
/dev/ram5: No label detected
Closed /dev/ram5
Opened /dev/hda5 RO O_DIRECT
/dev/hda5: block size is 4096 bytes
/dev/hda5: No label detected
Closed /dev/hda5
Opened /dev/vg_uhu00/var RO O_DIRECT
/dev/vg_uhu00/var: block size is 1024 bytes
/dev/vg_uhu00/var: No label detected
Closed /dev/vg_uhu00/var
/dev/ram6: Not using O_DIRECT
Opened /dev/ram6 RO
/dev/ram6: block size is 1024 bytes
/dev/ram6: No label detected
Closed /dev/ram6
Opened /dev/hda6 RO O_DIRECT
/dev/hda6: block size is 4096 bytes
/dev/hda6: No label detected
Closed /dev/hda6
Opened /dev/vg_uhu00/usr_local RO O_DIRECT
/dev/vg_uhu00/usr_local: block size is 1024 bytes
/dev/vg_uhu00/usr_local: No label detected
Closed /dev/vg_uhu00/usr_local
/dev/ram7: Not using O_DIRECT
Opened /dev/ram7 RO
/dev/ram7: block size is 1024 bytes
/dev/ram7: No label detected
Closed /dev/ram7
Opened /dev/hda7 RO O_DIRECT
/dev/hda7: block size is 1024 bytes
/dev/hda7: lvm2 label detected
Closed /dev/hda7
lvmcache: /dev/hda7 now orphaned
Opened /dev/hda7 RO O_DIRECT
/dev/hda7: block size is 1024 bytes
Incorrect metadata area header checksum
Closed /dev/hda7
Opened /dev/vg_uhu00/var_cache_rsnapshot RO O_DIRECT
/dev/vg_uhu00/var_cache_rsnapshot: block size is 4096 bytes
/dev/vg_uhu00/var_cache_rsnapshot: No label detected
Closed /dev/vg_uhu00/var_cache_rsnapshot
/dev/ram8: Not using O_DIRECT
Opened /dev/ram8 RO
/dev/ram8: block size is 1024 bytes
/dev/ram8: No label detected
Closed /dev/ram8
Opened /dev/hda8 RO O_DIRECT
/dev/hda8: block size is 4096 bytes
/dev/hda8: lvm2 label detected
Closed /dev/hda8
lvmcache: /dev/hda8 now orphaned
Opened /dev/hda8 RO O_DIRECT
/dev/hda8: block size is 4096 bytes
Closed /dev/hda8
lvmcache: /dev/hda8 now in VG vg_uhu00
Opened /dev/evms/hda1 RO O_DIRECT
/dev/evms/hda1: block size is 512 bytes
/dev/evms/hda1: No label detected
Closed /dev/evms/hda1
/dev/ram9: Not using O_DIRECT
Opened /dev/ram9 RO
/dev/ram9: block size is 1024 bytes
/dev/ram9: No label detected
Closed /dev/ram9
Opened /dev/hda9 RO O_DIRECT
/dev/hda9: block size is 512 bytes
/dev/hda9: lvm2 label detected
Closed /dev/hda9
lvmcache: /dev/hda9 now orphaned
Opened /dev/hda9 RO O_DIRECT
/dev/hda9: block size is 512 bytes
Closed /dev/hda9
lvmcache: /dev/hda9 now in VG vg_uhu00
Opened /dev/evms/hda5 RO O_DIRECT
/dev/evms/hda5: block size is 4096 bytes
/dev/evms/hda5: No label detected
Closed /dev/evms/hda5
/dev/ram10: Not using O_DIRECT
Opened /dev/ram10 RO
/dev/ram10: block size is 1024 bytes
/dev/ram10: No label detected
Closed /dev/ram10
Opened /dev/hda10 RO O_DIRECT
/dev/hda10: block size is 512 bytes
/dev/hda10: lvm2 label detected
Closed /dev/hda10
lvmcache: /dev/hda10 now orphaned
Opened /dev/hda10 RO O_DIRECT
/dev/hda10: block size is 512 bytes
Closed /dev/hda10
lvmcache: /dev/hda10 now in VG vg_uhu00
Opened /dev/evms/hda6 RO O_DIRECT
/dev/evms/hda6: block size is 4096 bytes
/dev/evms/hda6: No label detected
Closed /dev/evms/hda6
/dev/ram11: Not using O_DIRECT
Opened /dev/ram11 RO
/dev/ram11: block size is 1024 bytes
/dev/ram11: No label detected
Closed /dev/ram11
Opened /dev/evms/hda7 RO O_DIRECT
/dev/evms/hda7: block size is 512 bytes
/dev/evms/hda7: lvm2 label detected
Closed /dev/evms/hda7
Duplicate PV EWhly9hF5TcIV1KyHfXVUQDBe7S1uf14 on /dev/hda7 - using dm /dev/evms/hda7
Opened /dev/evms/hda7 RO O_DIRECT
/dev/evms/hda7: block size is 512 bytes
Incorrect metadata area header checksum
Closed /dev/evms/hda7
/dev/ram12: Not using O_DIRECT
Opened /dev/ram12 RO
/dev/ram12: block size is 1024 bytes
/dev/ram12: No label detected
Closed /dev/ram12
/dev/ram13: Not using O_DIRECT
Opened /dev/ram13 RO
/dev/ram13: block size is 1024 bytes
/dev/ram13: No label detected
Closed /dev/ram13
/dev/ram14: Not using O_DIRECT
Opened /dev/ram14 RO
/dev/ram14: block size is 1024 bytes
/dev/ram14: No label detected
Closed /dev/ram14
/dev/ram15: Not using O_DIRECT
Opened /dev/ram15 RO
/dev/ram15: block size is 1024 bytes
/dev/ram15: No label detected
Closed /dev/ram15
Opened /dev/evms/lvm2/vg_uhu00/home RO O_DIRECT
/dev/evms/lvm2/vg_uhu00/home: block size is 4096 bytes
/dev/evms/lvm2/vg_uhu00/home: No label detected
Closed /dev/evms/lvm2/vg_uhu00/home
Opened /dev/evms/lvm2/vg_uhu00/opt RO O_DIRECT
/dev/evms/lvm2/vg_uhu00/opt: block size is 4096 bytes
/dev/evms/lvm2/vg_uhu00/opt: No label detected
Closed /dev/evms/lvm2/vg_uhu00/opt
Opened /dev/evms/lvm2/vg_uhu00/root RO O_DIRECT
/dev/evms/lvm2/vg_uhu00/root: block size is 4096 bytes
/dev/evms/lvm2/vg_uhu00/root: No label detected
Closed /dev/evms/lvm2/vg_uhu00/root
Opened /dev/evms/lvm2/vg_uhu00/tmp RO O_DIRECT
/dev/evms/lvm2/vg_uhu00/tmp: block size is 4096 bytes
/dev/evms/lvm2/vg_uhu00/tmp: No label detected
Closed /dev/evms/lvm2/vg_uhu00/tmp
Opened /dev/evms/lvm2/vg_uhu00/usr RO O_DIRECT
/dev/evms/lvm2/vg_uhu00/usr: block size is 4096 bytes
/dev/evms/lvm2/vg_uhu00/usr: No label detected
Closed /dev/evms/lvm2/vg_uhu00/usr
Opened /dev/evms/lvm2/vg_uhu00/usr_local RO O_DIRECT
/dev/evms/lvm2/vg_uhu00/usr_local: block size is 4096 bytes
/dev/evms/lvm2/vg_uhu00/usr_local: No label detected
Closed /dev/evms/lvm2/vg_uhu00/usr_local
Opened /dev/evms/lvm2/vg_uhu00/var RO O_DIRECT
/dev/evms/lvm2/vg_uhu00/var: block size is 4096 bytes
/dev/evms/lvm2/vg_uhu00/var: No label detected
Closed /dev/evms/lvm2/vg_uhu00/var
Opened /dev/evms/lvm2/vg_uhu00/var_cache_rsnapshot RO O_DIRECT
/dev/evms/lvm2/vg_uhu00/var_cache_rsnapshot: block size is 4096 bytes
/dev/evms/lvm2/vg_uhu00/var_cache_rsnapshot: No label detected
Closed /dev/evms/lvm2/vg_uhu00/var_cache_rsnapshot
Locking /var/lock/lvm/V_vg_uhu00 RB
Finding volume group "vg_uhu00"
Opened /dev/hda8 RO O_DIRECT
/dev/hda8: block size is 4096 bytes
/dev/hda8: lvm2 label detected
Opened /dev/hda9 RO O_DIRECT
/dev/hda9: block size is 512 bytes
/dev/hda9: lvm2 label detected
Opened /dev/hda10 RO O_DIRECT
/dev/hda10: block size is 512 bytes
/dev/hda10: lvm2 label detected
/dev/hda8: lvm2 label detected
/dev/hda9: lvm2 label detected
/dev/hda10: lvm2 label detected
Read vg_uhu00 metadata (932) from /dev/hda8 at 113152 size 4207
/dev/hda8: lvm2 label detected
/dev/hda9: lvm2 label detected
/dev/hda10: lvm2 label detected
Read vg_uhu00 metadata (932) from /dev/hda9 at 83968 size 4207
/dev/hda8: lvm2 label detected
/dev/hda9: lvm2 label detected
/dev/hda10: lvm2 label detected
Read vg_uhu00 metadata (932) from /dev/hda10 at 27648 size 4207
/dev/hda8 0: 0 256: root(0:0)
/dev/hda8 1: 256 1024: home(0:0)
/dev/hda8 2: 1280 512: usr(0:0)
/dev/hda8 3: 1792 256: opt(0:0)
/dev/hda8 4: 2048 125: home(1024:0)
/dev/hda8 5: 2173 256: tmp(0:0)
/dev/hda8 6: 2429 238: var(0:0)
/dev/hda8 7: 2667 18: var_cache_rsnapshot(1917:0)
/dev/hda8 8: 2685 125: usr_local(0:0)
/dev/hda8 9: 2810 256: usr(512:0)
/dev/hda8 10: 3066 1024: var_cache_rsnapshot(0:0)
/dev/hda8 11: 4090 131: home(1149:0)
/dev/hda8 12: 4221 106: var_cache_rsnapshot(1024:0)
/dev/hda9 0: 0 662: var_cache_rsnapshot(1130:0)
/dev/hda9 1: 662 188: tmp(256:0)
/dev/hda9 2: 850 256: home(1280:0)
/dev/hda9 3: 1106 5: var_cache_rsnapshot(1935:0)
/dev/hda10 0: 0 188: root(256:0)
/dev/hda10 1: 188 188: usr(768:0)
/dev/hda10 2: 376 125: var_cache_rsnapshot(1792:0)
Getting device info for vg_uhu00-root
dm version O [16384]
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3CdjOet96DpeOAbPH3OBwU1R0Xy5xpEPu O [16384]
Getting device info for vg_uhu00-home
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3HX3ggwEt0JbuLgE9blJgOqUuw5jF7Wfz O [16384]
Getting device info for vg_uhu00-usr
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy38Aat6okCW4jqrSWodNAj7acpMcE2DMBB O [16384]
Getting device info for vg_uhu00-opt
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3BaODGoOT2Q4ts6Wah8ZQgfePJu0R1RSc O [16384]
Getting device info for vg_uhu00-tmp
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3Biwcuzhqr0WpHoFRFlZs8rhyn78X4nUY O [16384]
Getting device info for vg_uhu00-var
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3mqx311fiVLVtuwAPppqZfnWXuJewscVK O [16384]
Getting device info for vg_uhu00-usr_local
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3FjRudmLntxcpdBxZhv5bQ566WL68fu34 O [16384]
Getting device info for vg_uhu00-var_cache_rsnapshot
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy38HzIxKK6m6Ww4c5DijvKAF0Q6DkM8WIq O [16384]
Getting device info for vg_uhu00-root
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3CdjOet96DpeOAbPH3OBwU1R0Xy5xpEPu O [16384]
Getting device info for vg_uhu00-home
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3HX3ggwEt0JbuLgE9blJgOqUuw5jF7Wfz O [16384]
Getting device info for vg_uhu00-usr
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy38Aat6okCW4jqrSWodNAj7acpMcE2DMBB O [16384]
Getting device info for vg_uhu00-opt
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3BaODGoOT2Q4ts6Wah8ZQgfePJu0R1RSc O [16384]
Getting device info for vg_uhu00-tmp
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3Biwcuzhqr0WpHoFRFlZs8rhyn78X4nUY O [16384]
Getting device info for vg_uhu00-var
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3mqx311fiVLVtuwAPppqZfnWXuJewscVK O [16384]
Getting device info for vg_uhu00-usr_local
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy3FjRudmLntxcpdBxZhv5bQ566WL68fu34 O [16384]
Getting device info for vg_uhu00-var_cache_rsnapshot
dm info LVM-ZpY3wxDq1NlG2byJ5NqXZQ7C32MX1Hy38HzIxKK6m6Ww4c5DijvKAF0Q6DkM8WIq O [16384]
--- Volume group ---
VG Name vg_uhu00
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 932
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 8
Open LV 8
Max PV 0
Cur PV 3
Act PV 3
VG Size 23,20 GB
PE Size 4,00 MB
Total PE 5939
Alloc PE / Size 5939 / 23,20 GB
Free PE / Size 0 / 0
VG UUID ZpY3wx-Dq1N-lG2b-yJ5N-qXZQ-7C32-MX1Hy3
--- Logical volume ---
LV Name /dev/vg_uhu00/root
VG Name vg_uhu00
LV UUID CdjOet-96Dp-eOAb-PH3O-BwU1-R0Xy-5xpEPu
LV Write Access read/write
LV Status available
# open 1
LV Size 1,73 GB
Current LE 444
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 254:0
--- Logical volume ---
LV Name /dev/vg_uhu00/home
VG Name vg_uhu00
LV UUID HX3ggw-Et0J-buLg-E9bl-JgOq-Uuw5-jF7Wfz
LV Write Access read/write
LV Status available
# open 1
LV Size 6,00 GB
Current LE 1536
Segments 4
Allocation inherit
Read ahead sectors 0
Block device 254:1
--- Logical volume ---
LV Name /dev/vg_uhu00/usr
VG Name vg_uhu00
LV UUID 8Aat6o-kCW4-jqrS-WodN-Aj7a-cpMc-E2DMBB
LV Write Access read/write
LV Status available
# open 1
LV Size 3,73 GB
Current LE 956
Segments 3
Allocation inherit
Read ahead sectors 0
Block device 254:2
--- Logical volume ---
LV Name /dev/vg_uhu00/opt
VG Name vg_uhu00
LV UUID BaODGo-OT2Q-4ts6-Wah8-ZQgf-ePJu-0R1RSc
LV Write Access read/write
LV Status available
# open 1
LV Size 1,00 GB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:3
--- Logical volume ---
LV Name /dev/vg_uhu00/tmp
VG Name vg_uhu00
LV UUID Biwcuz-hqr0-WpHo-FRFl-Zs8r-hyn7-8X4nUY
LV Write Access read/write
LV Status available
# open 1
LV Size 1,73 GB
Current LE 444
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 254:4
--- Logical volume ---
LV Name /dev/vg_uhu00/var
VG Name vg_uhu00
LV UUID mqx311-fiVL-Vtuw-APpp-qZfn-WXuJ-ewscVK
LV Write Access read/write
LV Status available
# open 1
LV Size 952,00 MB
Current LE 238
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:5
--- Logical volume ---
LV Name /dev/vg_uhu00/usr_local
VG Name vg_uhu00
LV UUID FjRudm-Lntx-cpdB-xZhv-5bQ5-66WL-68fu34
LV Write Access read/write
LV Status available
# open 1
LV Size 500,00 MB
Current LE 125
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:6
--- Logical volume ---
LV Name /dev/vg_uhu00/var_cache_rsnapshot
VG Name vg_uhu00
LV UUID 8HzIxK-K6m6-Ww4c-5Dij-vKAF-0Q6D-kM8WIq
LV Write Access read/write
LV Status available
# open 1
LV Size 7,58 GB
Read volume group vg_uhu00 from /etc/lvm/backup/vg_uhu00
Unlocking /var/lock/lvm/V_vg_uhu00
Closed /dev/hda8
Closed /dev/hda9
Closed /dev/hda10
Current LE 1940
Segments 6
Allocation inherit
Read ahead sectors 0
Block device 254:7
--- Physical volumes ---
PV Name /dev/hda8
PV UUID ynwP5s-t6S7-3eus-0qNj-xYpm-IMCJ-TsfuDU
PV Status allocatable
Total PE / Free PE 4327 / 0
PV Name /dev/hda9
PV UUID MvqlEq-7tZf-pZfP-EU3m-bHyQ-xs3Q-ov8cBR
PV Status allocatable
Total PE / Free PE 1111 / 0
PV Name /dev/hda10
PV UUID Cm961g-X5km-ifdU-r0Cp-ACmd-N9qG-XcieTR
PV Status allocatable
Total PE / Free PE 501 / 0
* Re: [linux-lvm] Incorrect metadata area header checksum
2008-02-03 18:14 ` Eckhard Kosin
@ 2008-02-04 3:06 ` David Brown
2008-02-04 10:36 ` Alasdair G Kergon
0 siblings, 1 reply; 14+ messages in thread
From: David Brown @ 2008-02-04 3:06 UTC (permalink / raw)
To: LVM general discussion and development
On Sun, Feb 03, 2008 at 07:14:24PM +0100, Eckhard Kosin wrote:
>I just changed the filter rule to
>
> filter = [ "r|/dev/cdrom|", "r|^/dev/hda[1,6,7]$|" ]
>
>Now, hda10 will be found, but I get the "Incorrect metadata area header
>checksum" complaint from the scan of hda7. An excerpt from the
>output of "vgdisplay -vvv":
>
>...
> Opened /dev/hda7 RO O_DIRECT
> /dev/hda7: block size is 1024 bytes
> /dev/hda7: lvm2 label detected
> Closed /dev/hda7
> lvmcache: /dev/hda7 now orphaned
> Opened /dev/hda7 RO O_DIRECT
> /dev/hda7: block size is 1024 bytes
> Incorrect metadata area header checksum
> Closed /dev/hda7
> Opened /dev/vg_uhu00/var_cache_rsnapshot RO O_DIRECT
OK, the problem here isn't that you should have to create filter rules to
exclude these volumes, but that LVM thinks it is finding an lvm2 label on
them.
I'm going to guess that at some point, these partitions had an LVM label on
them, and then a regular filesystem was placed on them. Several
filesystems don't overwrite the first 1k or so of the volume, so the label
isn't going to be obliterated, just corrupted.
Can you do something like:
dd if=/dev/hda7 bs=1k count=1 | hexdump -C
You'll probably find that there is an lvm2 label there. I'm not sure of a
safe way of eliminating it.
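(A stale LVM2 label normally shows up as the ASCII signature "LABELONE" in the
second 512-byte sector; a quick check, assuming the signature is still readable
as text:)
  dd if=/dev/hda7 bs=512 count=2 2>/dev/null | strings | grep LABELONE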
David
* Re: [linux-lvm] Incorrect metadata area header checksum
2008-02-04 3:06 ` David Brown
@ 2008-02-04 10:36 ` Alasdair G Kergon
0 siblings, 0 replies; 14+ messages in thread
From: Alasdair G Kergon @ 2008-02-04 10:36 UTC (permalink / raw)
To: LVM general discussion and development
On Sun, Feb 03, 2008 at 07:06:48PM -0800, David Brown wrote:
> You'll probably find that there is an lvm2 label there. I'm not sure of a
> safe way of eliminating it.
man pvremove
And you really should upgrade lvm2 - that's a 2-year-old version with numerous
known bugs in it.
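For example (a sketch only, assuming /dev/hda7 is indeed the partition carrying
the stale label -- double-check the device before wiping anything):
  pvremove /dev/hda7      # add -ff to force removal if the corrupt label resists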
Alasdair
--
agk@redhat.com
* [linux-lvm] Incorrect metadata area header checksum
@ 2014-09-19 3:33 Boylan, Ross
0 siblings, 0 replies; 14+ messages in thread
From: Boylan, Ross @ 2014-09-19 3:33 UTC (permalink / raw)
To: linux-lvm@redhat.com
While doing some work on my system I added a new disk, GPT-partitioned it, and created a new VG "mongo" out of the big partition (~1 TB). After various operations detailed below, and a few hours of apparent success, things started to go wrong. My root file system (/, not just /root), in my other VG "turtle", experienced read problems and was remounted read-only. The new filesystem on mongo also became unreadable. vgscan and other LVM commands, which had been happy, started reporting
Incorrect metadata area header checksum
although they still reported info on turtle--but not mongo.
First question: If vgdisplay turtle displays the incorrect metadata message, is that a sure sign that turtle's metadata is bad, or could it be from mongo?
At first I thought it meant both VG's (on 2 separate disks) had failed, but now I'm not so sure--I've been able to reboot with turtle, though nothing in mongo is accessible.
Second question: what could cause the problem(s)?
Behind these questions I'm wondering what state my system is in, and whether this indicates LVM is unsafe to use in the way I do. It's worked great before this. I think I have to reinstall and restore from backups, since bad things were happening to my filesystems.
Thanks.
Ross Boylan
Details:
For both VG's I allocated all free space and wiped it:
lvcreate -l 100%FREE -n tozero turtle
cryptsetup --key-file /etc/crypt/big1ah zero_crypt /dev/turtle/tozero
dd if=/dev/zero of=/dev/mapper/zero_crypt
I then freed the LV's so created and added some of the resulting free space to other LV's.
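(Roughly this kind of sequence -- the LV names and sizes below are hypothetical,
not the actual commands that were run:)
  lvremove turtle/tozero                  # drop the wiping LV, freeing its extents
  lvextend -L +20G turtle/some_lv         # grow another LV into the freed space
  cryptsetup resize some_lv_crypt         # grow the dm-crypt mapping on top of it
  resize2fs /dev/mapper/some_lv_crypt     # finally grow the filesystem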
It seems possible that this may have stressed LVM too far. "turtle" also had active snapshots (no thin provisioning).
The growth was considerable, e.g., from 20G to 40G. Maybe the block size changed? But I made no changes to the root filesystem, and that's what failed first.
The necessary crypto headers disappeared from some of the LV's, although now that I've rebooted they seem to be back (?) for turtle. The RAID headers tested fine throughout. It looks as if the pv for mongo is still recognized, even though the VG is not:
# date; pvdisplay
Thu Sep 18 20:28:22 PDT 2014
Incorrect metadata area header checksum
--- Physical volume ---
PV Name /dev/md1
VG Name turtle
PV Size 696.68 GB / not usable 2.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 178350
Free PE 24932
Allocated PE 153418
PV UUID 3cc3d1-tvjW-ZVwP-Gegj-NKF3-S2bA-AEQ59e
"/dev/sda2" is a new physical volume of "931.51 GB"
--- NEW Physical volume ---
PV Name /dev/sda2
VG Name
PV Size 931.51 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID iTdabJ-Unml-Qs4h-wIQE-cpo0-7nWQ-tzRlCU
The software RAID uses 0.90 metadata.
VG mongo is made of one partition on one physical disk
VG turtle is made of one software RAID-1 disk; the RAID is made of GPT partitions on 2 disks.
The one LV on mongo had crypt (in the cryptsetup sense) on it, and many of the LV's on turtle (including root) used crypto.
Debian Lenny (very old--I was getting ready to upgrade) on amd64.
linux kernel 2.6.32-5-amd64 (which is newer than Lenny)
lvm2 2.02.39-8 The Linux Logical Volume Manager
cryptsetup 2:1.0.6-7 (ignore the 2: prefix; it's 1.0.6) configures encrypted block devices
mdadm 2.6.7.2-3 tool to administer Linux MD arrays (software RAID)