linux-btrfs.vger.kernel.org archive mirror
* how can I copy files bigger than ~32 GB when using compress-force?
@ 2010-10-04 20:26 Tomasz Chmielewski
  2010-10-04 20:37 ` Chris Mason
  0 siblings, 1 reply; 8+ messages in thread
From: Tomasz Chmielewski @ 2010-10-04 20:26 UTC (permalink / raw)
  To: linux-btrfs

I'm trying to copy a ~220 GB file from an ext4 to a btrfs filesystem with
336 GB of free space on it. The system is running 2.6.36-rc6.

File size on the ext4 filesystem:

# ls -l srv1-backup.qcow2
-rw-r--r-- 1 root root 219564146688 2010-09-26 21:56 srv1-backup.qcow2


Unfortunately, the copy fails when only about 32 GB has been copied (with
the compress-force mount option enabled).

Copying to the btrfs mount point stops, with no reads or writes on any
filesystem, once the file on btrfs reaches around 32 GB:

$ ls -l /mnt/btrfs/srv1-backup.qcow2
-rw-r--r-- 1 root root 32328482816 2010-10-04 21:49 /mnt/btrfs/srv1-backup.qcow2

This is 100% reproducible: when the copy reaches ~32 GB (varies by +/- 2 GB),
cp can't be interrupted or killed, and the only way to get this "fixed"
seems to be rebooting the machine.

_Sometimes_, after a longer wait, cp quits with "no space left".


To reproduce:

Let's create a btrfs filesystem:

#  mkfs.btrfs /dev/sdb4

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/sdb4
         nodesize 4096 leafsize 4096 sectorsize 4096 size 335.72GB
Btrfs Btrfs v0.19


# mount -o noatime,compress-force /dev/sdb4 /mnt/btrfs/

# df -h /mnt/btrfs
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb4             336G   56K  336G   1% /mnt/btrfs

# cp -v srv1-backup.qcow2 /mnt/btrfs/
`srv1-backup.qcow2' -> `/mnt/btrfs/srv1-backup.qcow2'


It always hangs for me after copying ~32 GB, or quits with "no space left".


-- 
Tomasz Chmielewski
http://wpkg.org

* Re: how can I copy files bigger than ~32 GB when using compress-force?
@ 2010-10-04 21:42 Tomasz Chmielewski
  2010-10-04 22:28 ` Chris Mason
  0 siblings, 1 reply; 8+ messages in thread
From: Tomasz Chmielewski @ 2010-10-04 21:42 UTC (permalink / raw)
  To: linux-btrfs

> I'm assuming this works without compress-force?  I can make a guess at
> what is happening, the compression forces a relatively small extent
> size, and this is making our worst case metadata reservations get upset.

Yes, it works without compress-force.
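
Chris's guess can be sanity-checked with some back-of-the-envelope
arithmetic. The 128 KiB cap on compressed extents is btrfs behavior; the
128 MiB figure for uncompressed extents is an assumption for comparison:

```shell
#!/bin/sh
# Rough extent counts for the 220 GB file, compressed vs. uncompressed.
file_size=219564146688                       # bytes, from ls -l srv1-backup.qcow2
compressed_extent=$((128 * 1024))            # btrfs caps compressed extents at 128 KiB
uncompressed_extent=$((128 * 1024 * 1024))   # assumed max uncompressed extent size
echo "compressed extents:   $((file_size / compressed_extent))"    # ~1.7 million
echo "uncompressed extents: $((file_size / uncompressed_extent))"  # ~1600
```

Three orders of magnitude more extents means far more metadata to track,
which would fit the "worst case metadata reservations get upset" theory.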

What's interesting is that cp or rsync sometimes just exits quite quickly
with "no space left".

Sometimes, they just "hang" (I've waited up to about an hour): the file
size does not grow anymore, the last-modified time is not updated, iostat
does not show any bytes read or written, and no btrfs or other processes
are using too much CPU. cp/rsync is not in "D" state (although it enters
"D" state and uses 100% CPU when I try to kill it).
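
When cp is in that state, the kernel can usually report where it is
blocked. This is generic /proc inspection rather than anything from this
thread; `pidof cp` assumes a single cp process, and reading the kernel
stack needs root:

```shell
#!/bin/sh
# Show the scheduler state and kernel stack of the (apparently) hung cp.
pid=$(pidof cp)
grep '^State' /proc/"$pid"/status   # e.g. "State: S (sleeping)" or "D (disk sleep)"
cat /proc/"$pid"/stack              # kernel stack trace (root; needs CONFIG_STACKTRACE)
```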

Could it be we're hitting two different bugs here?


> Does it happen with any 32gb file that doesn't compress well?

The 220 GB qcow2 file was basically incompressible (a backuppc archive
full of bzip2-compressed files).
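
To test that question reproducibly, incompressible data can be generated
from /dev/urandom. The sketch below works on 1 MiB and checks that gzip
barely shrinks it; scale count up (e.g. count=32768 for 32 GiB) to match
the sizes in the report:

```shell
#!/bin/sh
# Generate random (incompressible) data and confirm gzip can't shrink it.
dd if=/dev/urandom of=testfile.bin bs=1M count=1 2>/dev/null
gzip -k testfile.bin                 # -k keeps the original file
orig=$(wc -c < testfile.bin)
comp=$(wc -c < testfile.bin.gz)
echo "original: $orig bytes, gzipped: $comp bytes"   # gzipped size stays near 1 MiB
```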


-- 
Tomasz Chmielewski
http://wpkg.org




Thread overview: 8+ messages
2010-10-04 20:26 how can I copy files bigger than ~32 GB when using compress-force? Tomasz Chmielewski
2010-10-04 20:37 ` Chris Mason
  -- strict thread matches above, loose matches on Subject: below --
2010-10-04 21:42 Tomasz Chmielewski
2010-10-04 22:28 ` Chris Mason
2010-10-05  4:12   ` Chester
2010-10-05  6:16   ` Tomasz Chmielewski
2010-10-12 11:12   ` Tomasz Chmielewski
2010-10-12 15:01     ` Tomasz Chmielewski
