From: Ludovic Drolez <ldrolez@linbox.com>
To: linux-lvm@redhat.com
Subject: [linux-lvm] LVM2/DM data corruption during write with 2.6.11.12
Date: Thu, 08 Sep 2005 12:31:27 +0200
Message-ID: <432012FF.9080602@linbox.com>
Hi !
We are developing a (GPLed) disk cloning software similar to partimage: it's an
intelligent 'dd' which backs up only the used sectors. Project info and SVN are
available at http://lrs.linbox.org
Recently I added LVM1/2 support to it, and sometimes we see LVM restorations
failing randomly (the disk images taken from RHES are not corrupted, but the
result of the restoration can be a corrupted filesystem). If a restoration
fails, just try another one and it will work...
How the restoration program works (a rough sketch of the write loop follows the list):
- I restore the LVM2 administrative data (384 sectors, most of the time),
- I run 'vgscan' and 'vgchange',
- open the '/dev/dm-xxxx' device for writing,
- read a compressed image file over NFS,
- and put the sectors back in place, so it's a succession of '_llseek()' and
'write()' calls to the DM device.
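For reference, here is roughly what that seek/write loop looks like in C. This
is a simplified sketch, not the actual code from our tool: restore_chunk(), the
device name and the chunk layout are placeholders.

/* Simplified sketch of the restore loop, using the glibc lseek() wrapper
 * (which ends up as _llseek on 32-bit hosts). Compile with
 * -D_FILE_OFFSET_BITS=64 so off_t is 64-bit on 32-bit systems. */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define SECTOR 512

static void restore_chunk(int fd, off_t offset, const char *buf, size_t len)
{
    /* seek to the byte offset of this used chunk ... */
    if (lseek(fd, offset, SEEK_SET) == (off_t)-1) {
        perror("lseek");
        exit(1);
    }
    /* ... then write it back sector by sector */
    for (size_t done = 0; done < len; done += SECTOR)
        if (write(fd, buf + done, SECTOR) != SECTOR) {
            perror("write");
            exit(1);
        }
}

int main(void)
{
    int fd = open("/dev/dm-0", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }
    /* for each used chunk decoded from the compressed NFS image:
     *     restore_chunk(fd, chunk_offset, chunk_data, chunk_len); */
    close(fd);
    return 0;
}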
But *sometimes*, for example, the current seek position is at 9GB, and some
data is written to sector 0! It happens randomly.
Here is a typical strace of a restoration:
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
_llseek(5, 20963328, [37604032512], SEEK_CUR) = 0
_llseek(5, 0, [37604032512], SEEK_CUR) = 0
_llseek(5, 2097152, [37606129664], SEEK_CUR) = 0
write(5, "\1\335E\0\f\0\1\2.\0\0\0\2\0\0\0\364\17\2\2..\0\0\0\0\0"..., 512) = 51
2
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
_llseek(5, 88076288, [37694210048], SEEK_CUR) = 0
_llseek(5, 0, [37694210048], SEEK_CUR) = 0
_llseek(5, 20971520, [37715181568], SEEK_CUR) = 0
write(5, "\377\377\377\377\377\377\377\377\377\377\377\377\377\377"..., 512) = 5
12
....
....
As you can see, there are no seeks to sector 0, but something randomly writes
some data to sector 0!
I could reproduce these random problems on different kinds of PCs.
But the strace above comes from an improved version, which aggregates
'_llseek's. A previous version, which did *many* 512-byte seeks, had many more
problems. Aggregating seeks made the corruption appear very rarely... And it is
more likely to happen for a 40GB restoration than for a 10GB one.
So fewer system calls to the DM/LVM2 layer seems to mean a lower probability of
corruption (a sketch of the aggregation is below).
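The aggregation itself is nothing more than deferring the seek until the next
write. The helper names here (skip_sectors, write_sector) are illustrative,
not the real code:

/* Instead of issuing one _llseek() per skipped 512-byte sector,
 * accumulate the gap and seek once just before the next write. */
static off_t pending_skip;               /* bytes to skip before the next write */

static void skip_sectors(long n)
{
    pending_skip += (off_t)n * 512;      /* just remember the gap, no syscall */
}

static int write_sector(int fd, const char *buf)
{
    if (pending_skip) {
        /* one relative seek replaces the many per-sector seeks we did before */
        if (lseek(fd, pending_skip, SEEK_CUR) == (off_t)-1)
            return -1;
        pending_skip = 0;
    }
    return write(fd, buf, 512) == 512 ? 0 : -1;
}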
Any ideas? Could newer kernel releases have fixed such a problem?
--
Ludovic DROLEZ Linbox / Free&ALter Soft
www.linbox.com www.linbox.org