* [linux-lvm] multiple snapshots
@ 2003-11-26 6:10 Stefan Majer
0 siblings, 0 replies; 8+ messages in thread
From: Stefan Majer @ 2003-11-26 6:10 UTC (permalink / raw)
To: linux-lvm
Hi,
I'm currently working on a Samba file server setup based on LVM2 on top of
kernel 2.4.22.
Our customer now wants to keep several snapshots (one per day) of the
main share for backup purposes. He wants to keep up to two weeks online,
which results in 14 different snapshots for one source LV.
All of these snapshot LVs should be mounted.
The source LV is about 1 TB in size, and we have about 300 GB free for
snapshots.
Is this possible? Has anyone done something like this before? How big
would the performance impact be?
Any hints or numbers would be welcome.
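For concreteness, such a rotation could be scripted roughly like this (only
a sketch; the VG/LV names, snapshot size and mount points are hypothetical):

    #!/bin/sh
    # Daily snapshot rotation sketch (hypothetical names vg0/share, untested).
    TODAY=$(date +%Y%m%d)
    OLD=$(date -d '15 days ago' +%Y%m%d)

    # Take today's snapshot and mount it read-only next to the share.
    lvcreate -s -L 20G -n share-$TODAY /dev/vg0/share
    mkdir -p /snapshots/$TODAY
    mount -o ro /dev/vg0/share-$TODAY /snapshots/$TODAY

    # Drop the snapshot that has aged out of the two-week window.
    if [ -e /dev/vg0/share-$OLD ]; then
        umount /snapshots/$OLD
        lvremove -f /dev/vg0/share-$OLD
    fi

With 300 GB free and 14 snapshots, each snapshot LV gets roughly 20 GB of
copy-on-write space, so the limiting factor is how much of the 1 TB origin
changes per day.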
greetings
Stefan Majer
^ permalink raw reply [flat|nested] 8+ messages in thread
* [linux-lvm] multiple snapshots
@ 2005-08-15 11:50 Imre Gergely
2005-08-16 6:37 ` Imre Gergely
0 siblings, 1 reply; 8+ messages in thread
From: Imre Gergely @ 2005-08-15 11:50 UTC (permalink / raw)
To: linux-lvm
hi
I tried to create two snapshots of an LV and got into trouble ;) With one
snapshot there are apparently no problems, but with two the kernel gives me
random oopses and hangs. First I tried with the root LV, then with a simple
partition; the result was the same. I created the two snapshots, ran
vgdisplay, and everything looked fine. I mounted the origin LV, copied some
files, watched the usage percentage increase on the snapshots, then did an
'ls -la' on the mounted directory, and the kernel said bye-bye... I don't
see any kernel panic, just the usual register and memory dump, and then
the hang.
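(Roughly, the reproduction looked like this; the names, sizes and paths
below are placeholders, not the exact ones I used:)

    lvcreate -s -L 500M -n snap1 /dev/vg0/data    # first snapshot of the origin
    lvcreate -s -L 500M -n snap2 /dev/vg0/data    # second snapshot of the same origin
    mount /dev/vg0/data /mnt/data                 # mount the origin LV
    cp -a /some/files /mnt/data/                  # copy data, COW hits both snapshots
    ls -la /mnt/data                              # around here the kernel oopses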
Any ideas? Don't tell me I cannot have more than one snapshot of a
given LV.
thx.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [linux-lvm] multiple snapshots
2005-08-15 11:50 Imre Gergely
@ 2005-08-16 6:37 ` Imre Gergely
2005-08-16 7:17 ` Imre Gergely
0 siblings, 1 reply; 8+ messages in thread
From: Imre Gergely @ 2005-08-16 6:37 UTC (permalink / raw)
To: LVM general discussion and development
Mmm, I forgot:
[root@fc2 root]# uname -r
2.6.12.5
lvm> version
LVM version: 2.01.13-cvs (2005-06-14)
Library version: 1.01.01 (2005-03-29)
Driver version: 4.4.0
Imre Gergely wrote:
> hi
>
> i tried to create 2 snapshots of a LV, and i got in trouble ;) with one
> snapshot there are apparently no problems, with two the kernel gives me
> random oops-es and hangs. first, i tried with the root LV, then i tried
> with a simple partition, the result were the same. i created the two
> snapshots, after that i did a vgdisplay, there was everything alright, i
> mounted the origin LV, i copied some files, i watched the percentage
> increasing on the snapshots, then i did a 'ls -la' on the mounted
> directory and then the kernel said bye-bye... i don't see any kernel
> panic, just the usual register and memory dump, and the hang.
>
> any ideas? don't tell me i cannot have more than one snapshot on one
> given LV.
>
> thx.
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
--
Gergely Imre
SysAdmin
Astral Telecom
http://www.nextra.ro/gimre
GPG key: 0x34525305 (www.keyserver.net)
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [linux-lvm] multiple snapshots
2005-08-16 6:37 ` Imre Gergely
@ 2005-08-16 7:17 ` Imre Gergely
2005-08-23 13:33 ` Imre Gergely
0 siblings, 1 reply; 8+ messages in thread
From: Imre Gergely @ 2005-08-16 7:17 UTC (permalink / raw)
To: LVM general discussion and development
I've got a log entry for one of the oopses:
Unable to handle kernel paging request at virtual address 00100108
printing eip:
c025e142
*pde = 00000000
Oops: 0000 [#1]
PREEMPT
Modules linked in: pcnet32
CPU: 0
EIP: 0060:[<c025e142>] Not tainted VLI
EFLAGS: 00010213 (2.6.12.5)
EIP is at __origin_write+0x62/0x230
eax: 001000e0 ebx: c59a9c60 ecx: c7c915c0 edx: ffff0001
esi: 001000e0 edi: c568a7cc ebp: 00000000 esp: c7189cd0
ds: 007b es: 007b ss: 0068
Process kjournald (pid: 991, threadinfo=c7188000 task=c7c915c0)
Stack: c70b6580 c7189d04 00000001 c7189cf4 00000000 c025dbc0 c7c5fe3c
c48b4cec
00000000 c6285460 00001340 00000010 c61b6b40 c5f0c220 00002be0
00000010
c1bca9e0 c037ae7c 00000001 c32e9aec c7189da4 c025e357 c568a7cc
c1bca560
Call Trace:
[<c025dbc0>] copy_callback+0x0/0x50
[<c025e357>] do_origin+0x47/0x70
[<c0251449>] __map_bio+0x49/0x120
[<c02518c1>] __clone_and_map+0x2a1/0x2b0
[<c0127470>] autoremove_wake_function+0x0/0x60
[<c0251978>] __split_bio+0xa8/0x130
[<c0251a6e>] dm_request+0x6e/0xa0
[<c0205c67>] generic_make_request+0x147/0x1e0
[<c0127470>] autoremove_wake_function+0x0/0x60
[<c0127470>] autoremove_wake_function+0x0/0x60
[<c0127470>] autoremove_wake_function+0x0/0x60
[<c0205d62>] submit_bio+0x62/0x100
[<c0153c44>] bio_alloc_bioset+0xe4/0x1c0
[<c0153d40>] bio_alloc+0x20/0x30
[<c0153572>] submit_bh+0xd2/0x120
[<c01c2371>] journal_commit_transaction+0xd51/0x1220
[<c01048be>] do_IRQ+0x1e/0x30
[<c02b51c7>] schedule+0x347/0x5e0
[<c01c4ab6>] kjournald+0xd6/0x250
[<c0127470>] autoremove_wake_function+0x0/0x60
[<c0127470>] autoremove_wake_function+0x0/0x60
[<c01029ae>] ret_from_fork+0x6/0x14
[<c01c49c0>] commit_timeout+0x0/0x10
[<c01c49e0>] kjournald+0x0/0x250
[<c0100c31>] kernel_thread_helper+0x5/0x14
Code: 85 d2 0f 85 46 01 00 00 8b 53 18 8d 42 e8 89 c3 8b 40 18 0f 18 00
90 39 fa 75 e2 8b 44 24 1c 85 c0 0f 84 fc 00 00 00 8b 74 24 1c <8b> 46
28 ba 01 00 ff ff 0f c1 10 85 d2 0f 85 2e 05 00 00 85 ed
At this point the system still works, but I cannot read from or write to
that mounted partition. It looks like ext3's journal has some problems with
the snapshots?
I rebooted, removed and then recreated the two snapshots, did some reading
and writing, and then another oops popped up:
Aug 16 10:32:21 fc2 kernel: Unable to handle kernel NULL pointer
dereference at
virtual address 00000000
Aug 16 10:32:21 fc2 kernel: printing eip:
Aug 16 10:32:21 fc2 kernel: c025e196
Aug 16 10:32:21 fc2 kernel: *pde = 00000000
Aug 16 10:32:21 fc2 kernel: Oops: 0002 [#1]
Aug 16 10:32:21 fc2 kernel: PREEMPT
Aug 16 10:32:21 fc2 kernel: Modules linked in: pcnet32
Aug 16 10:32:21 fc2 kernel: CPU: 0
Aug 16 10:32:21 fc2 kernel: EIP: 0060:[<c025e196>] Not tainted VLI
Aug 16 10:32:21 fc2 kernel: EFLAGS: 00010246 (2.6.12.5)
Aug 16 10:32:21 fc2 kernel: EIP is at __origin_write+0xb6/0x230
Aug 16 10:32:21 fc2 kernel: eax: 00000000 ebx: c583c0e0 ecx: c7fc2560 edx: 0000ffff
Aug 16 10:32:21 fc2 kernel: esi: c57a9b0c edi: c507498c ebp: 00000000 esp: c7fe5b14
Aug 16 10:32:21 fc2 kernel: ds: 007b es: 007b ss: 0068
Aug 16 10:32:21 fc2 kernel: Process pdflush (pid: 8, threadinfo=c7fe4000 task=c7fc2560)
Aug 16 10:32:21 fc2 kernel: Stack: c4c796a0 c7fe5b48 00000001 c7fe5b38 00000000 c025dbc0 c57a9b0c c7c5f1dc
Aug 16 10:32:21 fc2 kernel:        00000000 c5332dc0 000009e0 00000010 c50ed330 c7ee0520 000044e0 00000010
Aug 16 10:32:21 fc2 kernel:        c73e8620 c037ae7c 00000001 c51a4dbc c7fe5be8 c025e357 c507498c c73e85c0
Aug 16 10:32:21 fc2 kernel: Call Trace:
Aug 16 10:32:22 fc2 kernel: [<c025dbc0>] copy_callback+0x0/0x50
Aug 16 10:32:22 fc2 kernel: [<c025e357>] do_origin+0x47/0x70
Aug 16 10:32:22 fc2 kernel: [<c0251449>] __map_bio+0x49/0x120
Aug 16 10:32:22 fc2 kernel: [<c02518c1>] __clone_and_map+0x2a1/0x2b0
Aug 16 10:32:22 fc2 kernel: [<c0127470>] autoremove_wake_function+0x0/0x60
Aug 16 10:32:22 fc2 kernel: [<c0251978>] __split_bio+0xa8/0x130
Aug 16 10:32:22 fc2 kernel: [<c0251a6e>] dm_request+0x6e/0xa0
Aug 16 10:32:22 fc2 kernel: [<c0205c67>] generic_make_request+0x147/0x1e0
Aug 16 10:32:22 fc2 kernel: [<c0127470>] autoremove_wake_function+0x0/0x60
Aug 16 10:32:22 fc2 last message repeated 2 times
Aug 16 10:32:22 fc2 kernel: [<c0205d62>] submit_bio+0x62/0x100
Aug 16 10:32:22 fc2 kernel: [<c0153c44>] bio_alloc_bioset+0xe4/0x1c0
Aug 16 10:32:22 fc2 kernel: [<c0153d40>] bio_alloc+0x20/0x30
Aug 16 10:32:22 fc2 kernel: [<c0153572>] submit_bh+0xd2/0x120
Aug 16 10:32:22 fc2 kernel: [<c0151c90>] __block_write_full_page+0x150/0x320
Aug 16 10:32:22 fc2 kernel: [<c01533dd>] block_write_full_page+0xcd/0x100
Aug 16 10:32:22 fc2 kernel: [<c01b2620>] ext3_get_block+0x0/0xa0
Aug 16 10:32:22 fc2 kernel: [<c01b3292>] ext3_ordered_writepage+0xd2/0x1c0
Aug 16 10:32:22 fc2 kernel: [<c01b2620>] ext3_get_block+0x0/0xa0
Aug 16 10:32:22 fc2 kernel: [<c01b3180>] bget_one+0x0/0x10
Aug 16 10:32:22 fc2 kernel: [<c0173842>] mpage_writepages+0x262/0x3e0
Aug 16 10:32:22 fc2 kernel: [<c01b31c0>] ext3_ordered_writepage+0x0/0x1c0
Aug 16 10:32:22 fc2 kernel: [<c013739d>] do_writepages+0x3d/0x50
Aug 16 10:32:22 fc2 kernel: [<c0171d61>] __sync_single_inode+0x71/0x210
Aug 16 10:32:22 fc2 kernel: [<c0171f67>] __writeback_single_inode+0x67/0x150
Aug 16 10:32:22 fc2 kernel: [<c0251b70>] dm_any_congested+0x30/0x60
Aug 16 10:32:22 fc2 kernel: [<c0253e8d>] dm_table_any_congested+0x5d/0x60
Aug 16 10:32:22 fc2 kernel: [<c0251b70>] dm_any_congested+0x30/0x60
Aug 16 10:32:22 fc2 kernel: [<c01721e7>] sync_sb_inodes+0x197/0x2a0
Aug 16 10:32:22 fc2 kernel: [<c01723c4>] writeback_inodes+0xd4/0xf0
Aug 16 10:32:22 fc2 kernel: [<c0137053>] background_writeout+0x73/0xc0
Aug 16 10:32:22 fc2 kernel: [<c0137afb>] __pdflush+0xbb/0x1a0
Aug 16 10:32:22 fc2 kernel: [<c0137be0>] pdflush+0x0/0x30
Aug 16 10:32:22 fc2 kernel: [<c0137c08>] pdflush+0x28/0x30
Aug 16 10:32:22 fc2 kernel: [<c0136fe0>] background_writeout+0x0/0xc0
Aug 16 10:32:22 fc2 kernel: [<c0137be0>] pdflush+0x0/0x30
Aug 16 10:32:22 fc2 kernel: [<c0126fa5>] kthread+0xa5/0xb0
Aug 16 10:32:22 fc2 kernel: [<c0126f00>] kthread+0x0/0xb0
Aug 16 10:32:22 fc2 kernel: [<c0100c31>] kernel_thread_helper+0x5/0x14
Aug 16 10:32:22 fc2 kernel: Code: c0 0f 84 e7 00 00 00 89 48 04 8b 4c 24 5c 89 4a 04
8b 46 2c 85 c0 0f 85 bf 00 00 00 c7 46 2c 01 00 00 00 8b 46 28 ba ff ff 00 00
<0f> c1 10 0f 85 f0 04 00 00 8b 5e 28 8b 43 10 8b 48 10 8b 41 04
After this, the same thing: I can neither read from nor write to the
partition, and eventually I have to reboot.
(There is one little detail I didn't mention: the "computer" I'm testing
on is a Fedora Core 2 installation running inside VMware. I don't think
that should be a problem, but could it be?)
Imre Gergely wrote:
> mmm, i forgot:
>
> [root@fc2 root]# uname -r
> 2.6.12.5
>
> lvm> version
> LVM version: 2.01.13-cvs (2005-06-14)
> Library version: 1.01.01 (2005-03-29)
> Driver version: 4.4.0
>
>
> Imre Gergely wrote:
>
>>hi
>>
>>i tried to create 2 snapshots of a LV, and i got in trouble ;) with one
>>snapshot there are apparently no problems, with two the kernel gives me
>>random oops-es and hangs. first, i tried with the root LV, then i tried
>>with a simple partition, the result were the same. i created the two
>>snapshots, after that i did a vgdisplay, there was everything alright, i
>>mounted the origin LV, i copied some files, i watched the percentage
>>increasing on the snapshots, then i did a 'ls -la' on the mounted
>>directory and then the kernel said bye-bye... i don't see any kernel
>>panic, just the usual register and memory dump, and the hang.
>>
>>any ideas? don't tell me i cannot have more than one snapshot on one
>>given LV.
>>
>>thx.
>>
>>_______________________________________________
>>linux-lvm mailing list
>>linux-lvm@redhat.com
>>https://www.redhat.com/mailman/listinfo/linux-lvm
>>read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>>
>
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [linux-lvm] multiple snapshots
2005-08-16 7:17 ` Imre Gergely
@ 2005-08-23 13:33 ` Imre Gergely
2005-08-23 17:13 ` Micha Holzmann
0 siblings, 1 reply; 8+ messages in thread
From: Imre Gergely @ 2005-08-23 13:33 UTC (permalink / raw)
To: LVM general discussion and development
Anybody? Anything? I still have this problem... The main question is: can
one have multiple snapshots of the same LV without problems? With one
snapshot everything is OK, but as soon as I create another snapshot and
try to do something on the origin LV, it hangs and oopses.
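(If it helps anyone reproduce this, the snapshot state can be checked while
testing with standard tools; output formats vary, this is only a pointer:)

    lvs               # lists origin and snapshot LVs and their allocation
    dmsetup status    # device-mapper view of the snapshot / snapshot-origin targets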
Imre Gergely wrote:
> i've got a log entry for one of the oops-es:
>
> Unable to handle kernel paging request at virtual address 00100108
> printing eip:
> c025e142
> *pde = 00000000
> Oops: 0000 [#1]
> PREEMPT
> Modules linked in: pcnet32
> CPU: 0
> EIP: 0060:[<c025e142>] Not tainted VLI
> EFLAGS: 00010213 (2.6.12.5)
> EIP is at __origin_write+0x62/0x230
> eax: 001000e0 ebx: c59a9c60 ecx: c7c915c0 edx: ffff0001
> esi: 001000e0 edi: c568a7cc ebp: 00000000 esp: c7189cd0
> ds: 007b es: 007b ss: 0068
> Process kjournald (pid: 991, threadinfo=c7188000 task=c7c915c0)
> Stack: c70b6580 c7189d04 00000001 c7189cf4 00000000 c025dbc0 c7c5fe3c
> c48b4cec
> 00000000 c6285460 00001340 00000010 c61b6b40 c5f0c220 00002be0
> 00000010
> c1bca9e0 c037ae7c 00000001 c32e9aec c7189da4 c025e357 c568a7cc
> c1bca560
> Call Trace:
> [<c025dbc0>] copy_callback+0x0/0x50
> [<c025e357>] do_origin+0x47/0x70
> [<c0251449>] __map_bio+0x49/0x120
> [<c02518c1>] __clone_and_map+0x2a1/0x2b0
> [<c0127470>] autoremove_wake_function+0x0/0x60
> [<c0251978>] __split_bio+0xa8/0x130
> [<c0251a6e>] dm_request+0x6e/0xa0
> [<c0205c67>] generic_make_request+0x147/0x1e0
> [<c0127470>] autoremove_wake_function+0x0/0x60
> [<c0127470>] autoremove_wake_function+0x0/0x60
> [<c0127470>] autoremove_wake_function+0x0/0x60
> [<c0205d62>] submit_bio+0x62/0x100
> [<c0153c44>] bio_alloc_bioset+0xe4/0x1c0
> [<c0153d40>] bio_alloc+0x20/0x30
> [<c0153572>] submit_bh+0xd2/0x120
> [<c01c2371>] journal_commit_transaction+0xd51/0x1220
> [<c01048be>] do_IRQ+0x1e/0x30
> [<c02b51c7>] schedule+0x347/0x5e0
> [<c01c4ab6>] kjournald+0xd6/0x250
> [<c0127470>] autoremove_wake_function+0x0/0x60
> [<c0127470>] autoremove_wake_function+0x0/0x60
> [<c01029ae>] ret_from_fork+0x6/0x14
> [<c01c49c0>] commit_timeout+0x0/0x10
> [<c01c49e0>] kjournald+0x0/0x250
> [<c0100c31>] kernel_thread_helper+0x5/0x14
> Code: 85 d2 0f 85 46 01 00 00 8b 53 18 8d 42 e8 89 c3 8b 40 18 0f 18 00
> 90 39 fa 75 e2 8b 44 24 1c 85 c0 0f 84 fc 00 00 00 8b 74 24 1c <8b> 46
> 28 ba 01 00 ff ff 0f c1 10 85 d2 0f 85 2e 05 00 00 85 ed
>
> at this point the system still works but i cannot do reading/writing on
> that mounted partition. looks like ext3's journal has some problems with
> the snapshots?
>
> i did a restart, i removed, then recreated the two snapshots, did some
> reading/writing, then another oops popped up:
>
> Aug 16 10:32:21 fc2 kernel: Unable to handle kernel NULL pointer
> dereference at
> virtual address 00000000
> Aug 16 10:32:21 fc2 kernel: printing eip:
> Aug 16 10:32:21 fc2 kernel: c025e196
> Aug 16 10:32:21 fc2 kernel: *pde = 00000000
> Aug 16 10:32:21 fc2 kernel: Oops: 0002 [#1]
> Aug 16 10:32:21 fc2 kernel: PREEMPT
> Aug 16 10:32:21 fc2 kernel: Modules linked in: pcnet32
> Aug 16 10:32:21 fc2 kernel: CPU: 0
> Aug 16 10:32:21 fc2 kernel: EIP: 0060:[<c025e196>] Not tainted VLI
> Aug 16 10:32:21 fc2 kernel: EFLAGS: 00010246 (2.6.12.5)
> Aug 16 10:32:21 fc2 kernel: EIP is at __origin_write+0xb6/0x230
> Aug 16 10:32:21 fc2 kernel: eax: 00000000 ebx: c583c0e0 ecx:
> c7fc2560 edx:
> 0000ffff
> Aug 16 10:32:21 fc2 kernel: esi: c57a9b0c edi: c507498c ebp:
> 00000000 esp:
> c7fe5b14
> Aug 16 10:32:21 fc2 kernel: ds: 007b es: 007b ss: 0068
> Aug 16 10:32:21 fc2 kernel: Process pdflush (pid: 8, threadinfo=c7fe4000
> task=c7
> fc2560)
> Aug 16 10:32:21 fc2 kernel: Stack: c4c796a0 c7fe5b48 00000001 c7fe5b38
> 00000000
> c025dbc0 c57a9b0c c7c5f1dc
> Aug 16 10:32:21 fc2 kernel: 00000000 c5332dc0 000009e0 00000010
> c50ed330
> c7ee0520 000044e0 00000010
> Aug 16 10:32:21 fc2 kernel: c73e8620 c037ae7c 00000001 c51a4dbc
> c7fe5be8
> c025e357 c507498c c73e85c0
> Aug 16 10:32:21 fc2 kernel: Call Trace:
> Aug 16 10:32:22 fc2 kernel: [<c025dbc0>] copy_callback+0x0/0x50
> Aug 16 10:32:22 fc2 kernel: [<c025e357>] do_origin+0x47/0x70
> Aug 16 10:32:22 fc2 kernel: [<c0251449>] __map_bio+0x49/0x120
> Aug 16 10:32:22 fc2 kernel: [<c02518c1>] __clone_and_map+0x2a1/0x2b0
> Aug 16 10:32:22 fc2 kernel: [<c0127470>] autoremove_wake_function+0x0/0x60
> Aug 16 10:32:22 fc2 kernel: [<c0251978>] __split_bio+0xa8/0x130
> Aug 16 10:32:22 fc2 kernel: [<c0251a6e>] dm_request+0x6e/0xa0
> Aug 16 10:32:22 fc2 kernel: [<c0205c67>] generic_make_request+0x147/0x1e0
> Aug 16 10:32:22 fc2 kernel: [<c0127470>] autoremove_wake_function+0x0/0x60
> Aug 16 10:32:22 fc2 last message repeated 2 times
> Aug 16 10:32:22 fc2 kernel: [<c0205d62>] submit_bio+0x62/0x100
> Aug 16 10:32:22 fc2 kernel: [<c0153c44>] bio_alloc_bioset+0xe4/0x1c0
> Aug 16 10:32:22 fc2 kernel: [<c0153d40>] bio_alloc+0x20/0x30
> Aug 16 10:32:22 fc2 kernel: [<c0153572>] submit_bh+0xd2/0x120
> Aug 16 10:32:22 fc2 kernel: [<c0151c90>]
> __block_write_full_page+0x150/0x320
> Aug 16 10:32:22 fc2 kernel: [<c01533dd>] block_write_full_page+0xcd/0x100
> Aug 16 10:32:22 fc2 kernel: [<c01b2620>] ext3_get_block+0x0/0xa0
> Aug 16 10:32:22 fc2 kernel: [<c01b3292>] ext3_ordered_writepage+0xd2/0x1c0
> Aug 16 10:32:22 fc2 kernel: [<c01b2620>] ext3_get_block+0x0/0xa0
> Aug 16 10:32:22 fc2 kernel: [<c01b3180>] bget_one+0x0/0x10
> Aug 16 10:32:22 fc2 kernel: [<c0173842>] mpage_writepages+0x262/0x3e0
> Aug 16 10:32:22 fc2 kernel: [<c01b31c0>] ext3_ordered_writepage+0x0/0x1c0
> Aug 16 10:32:22 fc2 kernel: [<c013739d>] do_writepages+0x3d/0x50
> Aug 16 10:32:22 fc2 kernel: [<c0171d61>] __sync_single_inode+0x71/0x210
> Aug 16 10:32:22 fc2 kernel: [<c0171f67>]
> __writeback_single_inode+0x67/0x150
> Aug 16 10:32:22 fc2 kernel: [<c0251b70>] dm_any_congested+0x30/0x60
> Aug 16 10:32:22 fc2 kernel: [<c0253e8d>] dm_table_any_congested+0x5d/0x60
> Aug 16 10:32:22 fc2 kernel: [<c0251b70>] dm_any_congested+0x30/0x60
> Aug 16 10:32:22 fc2 kernel: [<c01721e7>] sync_sb_inodes+0x197/0x2a0
> Aug 16 10:32:22 fc2 kernel: [<c01723c4>] writeback_inodes+0xd4/0xf0
> Aug 16 10:32:22 fc2 kernel: [<c0137053>] background_writeout+0x73/0xc0
> Aug 16 10:32:22 fc2 kernel: [<c0137afb>] __pdflush+0xbb/0x1a0
> Aug 16 10:32:22 fc2 kernel: [<c0137be0>] pdflush+0x0/0x30
> Aug 16 10:32:22 fc2 kernel: [<c0137c08>] pdflush+0x28/0x30
> Aug 16 10:32:22 fc2 kernel: [<c0136fe0>] background_writeout+0x0/0xc0
> Aug 16 10:32:22 fc2 kernel: [<c0137be0>] pdflush+0x0/0x30
> Aug 16 10:32:22 fc2 kernel: [<c0126fa5>] kthread+0xa5/0xb0
> Aug 16 10:32:22 fc2 kernel: [<c0126f00>] kthread+0x0/0xb0
> Aug 16 10:32:22 fc2 kernel: [<c0100c31>] kernel_thread_helper+0x5/0x14
> Aug 16 10:32:22 fc2 kernel: Code: c0 0f 84 e7 00 00 00 89 48 04 8b 4c 24
> 5c 89 4
> a 04 8b 46 2c 85 c0 0f 85 bf 00 00 00 c7 46 2c 01 00 00 00 8b 46 28 ba
> ff ff 00
> 00 <0f> c1 10 0f 85 f0 04 00 00 8b 5e 28 8b 43 10 8b 48 10 8b 41 04
>
> after this the same thing... i cannot read nor write to the partition,
> and eventually have to reboot.
>
> (now there's a little detail i didn't mention... this "computer" i'm
> testing on is in a vmware installed fedora core 2. but i don't think
> that could be a problem. or could it?)
>
> Imre Gergely wrote:
>
>>mmm, i forgot:
>>
>>[root@fc2 root]# uname -r
>>2.6.12.5
>>
>>lvm> version
>> LVM version: 2.01.13-cvs (2005-06-14)
>> Library version: 1.01.01 (2005-03-29)
>> Driver version: 4.4.0
>>
>>
>>Imre Gergely wrote:
>>
>>
>>>hi
>>>
>>>i tried to create 2 snapshots of a LV, and i got in trouble ;) with one
>>>snapshot there are apparently no problems, with two the kernel gives me
>>>random oops-es and hangs. first, i tried with the root LV, then i tried
>>>with a simple partition, the result were the same. i created the two
>>>snapshots, after that i did a vgdisplay, there was everything alright, i
>>>mounted the origin LV, i copied some files, i watched the percentage
>>>increasing on the snapshots, then i did a 'ls -la' on the mounted
>>>directory and then the kernel said bye-bye... i don't see any kernel
>>>panic, just the usual register and memory dump, and the hang.
>>>
>>>any ideas? don't tell me i cannot have more than one snapshot on one
>>>given LV.
>>>
>>>thx.
>>>
>>>_______________________________________________
>>>linux-lvm mailing list
>>>linux-lvm@redhat.com
>>>https://www.redhat.com/mailman/listinfo/linux-lvm
>>>read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>>>
>>
>>
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [linux-lvm] multiple snapshots
2005-08-23 13:33 ` Imre Gergely
@ 2005-08-23 17:13 ` Micha Holzmann
2005-08-24 13:48 ` Imre Gergely
0 siblings, 1 reply; 8+ messages in thread
From: Micha Holzmann @ 2005-08-23 17:13 UTC (permalink / raw)
To: linux-lvm
Imre Gergely wrote:
>
> anybody anything? i still have this problem... the main question is: can
> one have multiple snapshots of the same LV, without problems? with one
> snapshot e ok, but as soon as i create another snapshot, and try to do
> something on the origin LV, it hangs, it oops-es.
Same on my machines: with more than one snapshot, the system freezes
completely after a little while. Even the shutdown command does not work.
I can log in via ssh and see as many shutdown processes in the ps list as
I have issued. At the moment I assume that snapshots are not working as
expected, or as announced in the documentation. These are my observations.
Best regards,
Micha Holzmann
--
Programmers don't die, they just GOSUB without RETURN.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [linux-lvm] multiple snapshots
2005-08-23 17:13 ` Micha Holzmann
@ 2005-08-24 13:48 ` Imre Gergely
2005-08-24 13:56 ` Ming Zhang
0 siblings, 1 reply; 8+ messages in thread
From: Imre Gergely @ 2005-08-24 13:48 UTC (permalink / raw)
To: LVM general discussion and development
Micha Holzmann wrote:
> Imre Gergely wrote:
>
>>anybody anything? i still have this problem... the main question is: can
>>one have multiple snapshots of the same LV, without problems? with one
>>snapshot e ok, but as soon as i create another snapshot, and try to do
>>something on the origin LV, it hangs, it oops-es.
>
>
> Same on my machines, if more than one snapshot the system freezes after
> a little time totally. Even the shutdown command does not work. I can
> login via ssh an can see as many shutdown processes in ps list as i
> issued. At the moment i assume that snapshots are not working as
> expected or announced in the doku. These are my observations.
I tried on a real machine and the same thing happened. The only difference
is that I don't see any oopses or error messages indicating that something
is wrong...
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [linux-lvm] multiple snapshots
2005-08-24 13:48 ` Imre Gergely
@ 2005-08-24 13:56 ` Ming Zhang
0 siblings, 0 replies; 8+ messages in thread
From: Ming Zhang @ 2005-08-24 13:56 UTC (permalink / raw)
To: LVM general discussion and development
LVM snapshots are known to be unstable so far. If you are interested in
searching the archives, there are many discussions on this.
The LVM people plan to stabilize them, but with lower priority than other
tasks, so we can only wait and hope it gets done soon, or petition to get
it done faster. :P
Ming
On Wed, 2005-08-24 at 16:48 +0300, Imre Gergely wrote:
> Micha Holzmann wrote:
> > Imre Gergely wrote:
> >
> >>anybody anything? i still have this problem... the main question is: can
> >>one have multiple snapshots of the same LV, without problems? with one
> >>snapshot e ok, but as soon as i create another snapshot, and try to do
> >>something on the origin LV, it hangs, it oops-es.
> >
> >
> > Same on my machines, if more than one snapshot the system freezes after
> > a little time totally. Even the shutdown command does not work. I can
> > login via ssh an can see as many shutdown processes in ps list as i
> > issued. At the moment i assume that snapshots are not working as
> > expected or announced in the doku. These are my observations.
>
> i tried on a real machine, the same thing happened. the only thing
> different is that i don't see any oopses, or any error messages
> indicating that something is wrong...
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread
Thread overview: 8+ messages
-- links below jump to the message on this page --
2003-11-26 6:10 [linux-lvm] multiple snapshots Stefan Majer
-- strict thread matches above, loose matches on Subject: below --
2005-08-15 11:50 Imre Gergely
2005-08-16 6:37 ` Imre Gergely
2005-08-16 7:17 ` Imre Gergely
2005-08-23 13:33 ` Imre Gergely
2005-08-23 17:13 ` Micha Holzmann
2005-08-24 13:48 ` Imre Gergely
2005-08-24 13:56 ` Ming Zhang