* how can I copy files bigger than ~32 GB when using compress-force?
From: Tomasz Chmielewski @ 2010-10-04 20:26 UTC (permalink / raw)
To: linux-btrfs
I'm trying to copy a ~220 GB file from an ext4 to a btrfs filesystem with
336 GB free space on it. System is running 2.6.36-rc6.
File size on the ext4 filesystem:
# ls -l srv1-backup.qcow2
-rw-r--r-- 1 root root 219564146688 2010-09-26 21:56 srv1-backup.qcow2
Unfortunately it fails when only about 32 GB is copied (with
the compress-force mount option enabled).
Copying to the btrfs mount point stops, with no reads or writes on any
filesystem, once the file copied to btrfs is around 32 GB:
$ ls -l /mnt/btrfs/srv1-backup.qcow2
-rw-r--r-- 1 root root 32328482816 2010-10-04 21:49
/mnt/btrfs/srv1-backup.qcow2
This is 100% reproducible: when it gets to ~32 GB (varies by +/- 2 GB),
cp can't be interrupted or killed, and the only way to get this "fixed"
seems to be rebooting the machine.
_Sometimes_, cp quits after a longer wait with "no space left".
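If useful, a blocked-task dump can be triggered while it hangs
(assuming sysrq is enabled):
# echo w > /proc/sysrq-trigger
# dmesg | tail -n 50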
To reproduce:
Let's create a btrfs filesystem:
# mkfs.btrfs /dev/sdb4
WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created label (null) on /dev/sdb4
nodesize 4096 leafsize 4096 sectorsize 4096 size 335.72GB
Btrfs Btrfs v0.19
# mount -o noatime,compress-force /dev/sdb4 /mnt/btrfs/
# df -h /mnt/btrfs
Filesystem Size Used Avail Use% Mounted on
/dev/sdb4 336G 56K 336G 1% /mnt/btrfs
# cp -v srv1-backup.qcow2 /mnt/btrfs/
`srv1-backup.qcow2' -> `/mnt/btrfs/srv1-backup.qcow2'
It always hangs for me after copying ~32 GB, or quits with "no space left".
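When it quits with "no space left", it may be worth checking the
data/metadata split with the btrfs tool (if your btrfs-progs has it):
# btrfs filesystem df /mnt/btrfs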
--
Tomasz Chmielewski
http://wpkg.org
* Re: how can I copy files bigger than ~32 GB when using compress-force?
From: Chris Mason @ 2010-10-04 20:37 UTC (permalink / raw)
To: Tomasz Chmielewski; +Cc: linux-btrfs
On Mon, Oct 04, 2010 at 10:26:26PM +0200, Tomasz Chmielewski wrote:
> I'm trying to copy a ~220 GB file from an ext4 to a btrfs filesystem
> with 336 GB free space on it. System is running 2.6.36-rc6.
>
> File size on the ext4 filesystem:
>
> # ls -l srv1-backup.qcow2
> -rw-r--r-- 1 root root 219564146688 2010-09-26 21:56 srv1-backup.qcow2
>
>
> Unfortunately it fails when only about 32 GB is copied (with
> the compress-force mount option enabled).
I'm assuming this works without compress-force? I can make a guess at
what is happening: the compression forces a relatively small extent
size, and this is making our worst-case metadata reservations get upset.
Does it happen with any 32gb file that doesn't compress well?
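For example, something along these lines should produce a file that
doesn't compress (just a sketch, the path is made up):
# dd if=/dev/urandom of=/mnt/btrfs/random-32g bs=1M count=32768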
-chris
* Re: how can I copy files bigger than ~32 GB when using compress-force?
From: Tomasz Chmielewski @ 2010-10-04 21:42 UTC (permalink / raw)
To: linux-btrfs
> I'm assuming this works without compress-force? I can make a guess at
> what is happening: the compression forces a relatively small extent
> size, and this is making our worst-case metadata reservations get upset.
Yes, it works without compress-force.
Interestingly, cp or rsync sometimes just exits quite fast with "no
space left".
Sometimes, they just "hang" (waited up to about an hour) - file size
does not grow anymore, last modified time is not updated, iostat does
not show any bytes read/written, there are no btrfs or any other
processes taking too much CPU, cp/rsync is not in "D" state (although it
gets to "D" state and uses 100% CPU as I try to kill it).
Could it be we're hitting two different bugs here?
> Does it happen with any 32gb file that doesn't compress well?
The 220 GB qcow2 file was basically uncompressible (a backuppc archive
full of bzip2-compressed files).
--
Tomasz Chmielewski
http://wpkg.org
* Re: how can I copy files bigger than ~32 GB when using compress-force?
From: Chris Mason @ 2010-10-04 22:28 UTC (permalink / raw)
To: Tomasz Chmielewski; +Cc: linux-btrfs
On Mon, Oct 04, 2010 at 11:42:12PM +0200, Tomasz Chmielewski wrote:
> > I'm assuming this works without compress-force? I can make a guess at
> > what is happening: the compression forces a relatively small extent
> > size, and this is making our worst-case metadata reservations get upset.
>
> Yes, it works without compress-force.
>
> Interestingly, cp or rsync sometimes just exits quite fast with
> "no space left".
>
> Sometimes, they just "hang" (waited up to about an hour) - file size
> does not grow anymore, last modified time is not updated, iostat
Sorry, is this hang/fast exit with or without compress-force?
> does not show any bytes read/written, there are no btrfs or any
> other processes taking too much CPU, cp/rsync is not in "D" state
> (although it gets to "D" state and uses 100% CPU as I try to kill
> it).
>
> Could it be we're hitting two different bugs here?
>
>
> >Does it happen with any 32gb file that doesn't compress well?
>
> The 220 GB qcow2 file was basically uncompressible (a backuppc archive
> full of bzip2-compressed files).
Ok, I think I know what is happening here; they all lead to the same
chunk of code. I'll be able to reproduce this locally.
-chris
* Re: how can I copy files bigger than ~32 GB when using compress-force?
From: Chester @ 2010-10-05 4:12 UTC (permalink / raw)
To: Chris Mason, Tomasz Chmielewski, linux-btrfs
I once tried to copy a chroot onto a USB mount, mounted with the
'compress,noatime' option, and the same thing kept happening to me. I
started using slower methods to transfer the files over, and I no
longer encountered the hanging problem. I was not copying any big
files, but it was a chroot, so there were lots of small files.
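A rate-limited rsync is one such slower method, e.g. (paths here are
just placeholders):
$ rsync -a --bwlimit=5000 /path/to/chroot/ /mnt/usb/chroot/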
On Mon, Oct 4, 2010 at 5:28 PM, Chris Mason <chris.mason@oracle.com> wrote:
>
> On Mon, Oct 04, 2010 at 11:42:12PM +0200, Tomasz Chmielewski wrote:
> > > I'm assuming this works without compress-force? I can make a guess at
> > > what is happening: the compression forces a relatively small extent
> > > size, and this is making our worst-case metadata reservations get upset.
> >
> > Yes, it works without compress-force.
> >
> > Interestingly, cp or rsync sometimes just exits quite fast with
> > "no space left".
> >
> > Sometimes, they just "hang" (waited up to about an hour) - file size
> > does not grow anymore, last modified time is not updated, iostat
>
> Sorry, is this hang/fast exit with or without compress-force?
>
> > does not show any bytes read/written, there are no btrfs or any
> > other processes taking too much CPU, cp/rsync is not in "D" state
> > (although it gets to "D" state and uses 100% CPU as I try to kill
> > it).
> >
> > Could it be we're hitting two different bugs here?
> >
> > > Does it happen with any 32gb file that doesn't compress well?
> >
> > The 220 GB qcow2 file was basically uncompressible (a backuppc archive
> > full of bzip2-compressed files).
>
> Ok, I think I know what is happening here; they all lead to the same
> chunk of code. I'll be able to reproduce this locally.
>
> -chris
* Re: how can I copy files bigger than ~32 GB when using compress-force?
From: Tomasz Chmielewski @ 2010-10-05 6:16 UTC (permalink / raw)
To: Chris Mason, linux-btrfs
On 05.10.2010 00:28, Chris Mason wrote:
> On Mon, Oct 04, 2010 at 11:42:12PM +0200, Tomasz Chmielewski wrote:
>>> I'm assuming this works without compress-force? I can make a guess at
>>> what is happening: the compression forces a relatively small extent
>>> size, and this is making our worst-case metadata reservations get upset.
>>
>> Yes, it works without compress-force.
>>
>> Interestingly, cp or rsync sometimes just exits quite fast with
>> "no space left".
>>
>> Sometimes, they just "hang" (waited up to about an hour) - file size
>> does not grow anymore, last modified time is not updated, iostat
>
> Sorry, is this hang/fast exit with or without compress-force?
It only seems to happen with compress-force.
--
Tomasz Chmielewski
http://wpkg.org
* Re: how can I copy files bigger than ~32 GB when using compress-force?
From: Tomasz Chmielewski @ 2010-10-12 11:12 UTC (permalink / raw)
To: Chris Mason, linux-btrfs
On 05.10.2010 00:28, Chris Mason wrote:
>>> Does it happen with any 32gb file that doesn't compress well?
>>
>> The 220 GB qcow2 file was basically uncompressible (a backuppc archive
>> full of bzip2-compressed files).
>
> Ok, I think I know what is happening here; they all lead to the same
> chunk of code. I'll be able to reproduce this locally.
FYI, qemu/kvm doesn't seem to like having its files on a btrfs
filesystem mounted with compress-force.
I have a filesystem mounted with noatime,compress-force, where I created
a 100 GB sparse file.
There, I wanted to install a Linux distribution - however, the whole
qemu-kvm process hung, with these entries repeated over and over.
It's not possible to kill the qemu-kvm process (even with kill -9).
[103678.068429] INFO: task qemu-kvm:18722 blocked for more than 120 seconds.
[103678.068434] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[103678.068439] qemu-kvm D 00000001062a02ba 0 18722 18607 0x00000000
[103678.068446] ffff88008d80bb38 0000000000000082 ffff880000000000 00000000000148c0
[103678.068453] ffff88008725db00 00000000000148c0 ffff88008d80bfd8 00000000000148c0
[103678.068459] ffff88008d80a000 00000000000148c0 ffff88008725dea8 00000000000148c0
[103678.068466] Call Trace:
[103678.068490] [<ffffffffa06a3ee5>] btrfs_start_ordered_extent+0x75/0xc0 [btrfs]
[103678.068499] [<ffffffff8107e5e0>] ? autoremove_wake_function+0x0/0x40
[103678.068517] [<ffffffffa06a44a1>] btrfs_wait_ordered_range+0xc1/0x160 [btrfs]
[103678.068535] [<ffffffffa0695922>] btrfs_file_aio_write+0x542/0x9a0 [btrfs]
[103678.068542] [<ffffffff8108f562>] ? futex_wait_queue_me+0xc2/0x100
[103678.068549] [<ffffffff81082462>] ? hrtimer_cancel+0x22/0x30
[103678.068568] [<ffffffffa06953e0>] ? btrfs_file_aio_write+0x0/0x9a0 [btrfs]
[103678.068576] [<ffffffff81144533>] do_sync_readv_writev+0xd3/0x110
[103678.068583] [<ffffffff81091db9>] ? do_futex+0x109/0xb00
[103678.068590] [<ffffffff811d9789>] ? security_file_permission+0x29/0xa0
[103678.068597] [<ffffffff811447f4>] do_readv_writev+0xd4/0x1f0
[103678.068603] [<ffffffff810714ef>] ? kill_pid_info+0x3f/0x60
[103678.068610] [<ffffffff81144958>] vfs_writev+0x48/0x60
[103678.068615] [<ffffffff81144c92>] sys_pwritev+0xa2/0xc0
[103678.068623] [<ffffffff8100af82>] system_call_fastpath+0x16/0x1b
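For reference, the setup was roughly along these lines (a sketch; the
image path and ISO name are placeholders):
# truncate -s 100G /mnt/btrfs/disk.img
# qemu-kvm -m 1024 -hda /mnt/btrfs/disk.img -cdrom distro.iso -boot d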
--
Tomasz Chmielewski
http://wpkg.org
* Re: how can I copy files bigger than ~32 GB when using compress-force?
From: Tomasz Chmielewski @ 2010-10-12 15:01 UTC (permalink / raw)
To: Chris Mason, linux-btrfs
On 12.10.2010 13:12, Tomasz Chmielewski wrote:
> On 05.10.2010 00:28, Chris Mason wrote:
>
>>>> Does it happen with any 32gb file that doesn't compress well?
>>>
>>> The 220 GB qcow2 file was basically uncompressible (a backuppc archive
>>> full of bzip2-compressed files).
>>
>> Ok, I think I know what is happening here; they all lead to the same
>> chunk of code. I'll be able to reproduce this locally.
>
> FYI, qemu/kvm doesn't seem to like having its files on a btrfs
> filesystem mounted with compress-force.
>
> I have a filesystem mounted with noatime,compress-force, where I created
> a 100 GB sparse file.
> There, I wanted to install a Linux distribution - however, the whole
> qemu-kvm process hung, with these entries repeated over and over.
> It's not possible to kill the qemu-kvm process (even with kill -9).
>
>
> [103678.068429] INFO: task qemu-kvm:18722 blocked for more than 120 seconds.
Hmm, I see it blocks infinitely like this whether qemu-kvm tries to
access a sparse file on a filesystem mounted with compress,
compress-force, or no compression at all.
It also hangs with non-sparse files on a filesystem mounted without
compression (I didn't try with compression).
So it must be some other bug.
Also, when it happens, I occasionally see:
bad ordered accounting left 4096 size 12288
--
Tomasz Chmielewski
http://wpkg.org