public inbox for linux-btrfs@vger.kernel.org
From: Thomas Kuther <tom@kuther.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: Issues with "no space left on device" maybe related to 3.13 and/or kvm disk image fragmentation
Date: Mon, 13 Jan 2014 00:05:25 +0100	[thread overview]
Message-ID: <52D31FB5.1040906@kuther.net> (raw)
In-Reply-To: <52D2F9FF.5060905@kuther.net>

I did some more digging, and I think I am dealing with two, possibly unrelated, issues here.

The "no space left on device" errors could be caused by the amount of
metadata in use. I defragmented the KVM image and some other parts, ran
"btrfs balance start -dusage=5", and now it looks like this:

└» btrfs fi df /
Data, single: total=113.11GiB, used=88.83GiB
System, DUP: total=64.00MiB, used=24.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=3.00GiB, used=2.40GiB
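
The defragment and balance steps above can be sketched roughly as follows
(the exact paths and flags are illustrative, not a verbatim copy of what I ran):

```shell
# Defragment the KVM image directory; -r recurses into it
sudo btrfs filesystem defragment -r /mnt/ssd/kvm-images/

# Rewrite data block groups that are less than 5% used, so their
# space is returned to the unallocated pool
sudo btrfs balance start -dusage=5 /
```

The low -dusage threshold keeps the balance quick, since it only touches
nearly-empty chunks.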


The issue with copying/moving off the KVM image still remains: both
"cp" and "mv" hang. Interestingly, what did work was "qemu-img
convert -O raw ...", so at least I have a fresh backup now. The VM works
just fine with the original image file. I really wonder what goes wrong
with cp and mv.
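
The qemu-img workaround looked roughly like this (the destination path is
illustrative; I only know it landed on the raid5 mount):

```shell
# Read the image through qemu-img instead of cp/mv;
# -O raw writes a plain raw image at the destination
qemu-img convert -O raw \
    /mnt/ssd/kvm-images/Windows_8_Pro.qcow2 \
    /mnt/btrfs/backup/Windows_8_Pro.raw
```

Note that -O raw changes the image format, so any qcow2-internal snapshots
are not carried over; it was good enough for a backup copy here.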


And I stumbled over a third issue with my raid5 array:
└» df -h|grep /mnt/btrfs
/dev/md0        5,5T    3,4T  2,1T   63% /mnt/btrfs
└» sudo btrfs fi df /mnt/btrfs/
Data, single: total=3.33TiB, used=3.33TiB
System, DUP: total=8.00MiB, used=388.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=56.12GiB, used=5.14GiB
Metadata, single: total=8.00MiB, used=0.00

The array was grown quite a while ago using "btrfs filesystem
resize max", but "btrfs fi df" still shows the old data size. How could
that happen?
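
For reference, the grow sequence was essentially the following (mount point
as above; the show/df calls are just to compare what the filesystem reports):

```shell
# After growing the underlying md device, extend the filesystem
# to use all available space on it
sudo btrfs filesystem resize max /mnt/btrfs/

# Compare what the filesystem reports against df(1)
sudo btrfs filesystem show /mnt/btrfs/
sudo btrfs filesystem df /mnt/btrfs/
```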

This is becoming a "collection of maybe unrelated BTRFS funny tales"
thread... still, I'd be happy about suggestions regarding any of these issues.

Thanks,
Tom


Am 12.01.2014 21:24, schrieb Thomas Kuther:
> Hello,
> 
> I'm experiencing an interesting issue with the BTRFS filesystem on my
> SSD drive. It first occurred some time after the upgrade to kernel
> 3.13-rc (-rc3 was my first 3.13-rc), but I'm not sure if it is related.
> 
> The obvious symptoms are that services on my system started crashing
> with "no space left on device" errors.
> 
> └» mount |grep "/mnt/ssd"
> /dev/sda2 on /mnt/ssd type btrfs
> (rw,noatime,compress=lzo,ssd,noacl,space_cache)
> 
> └» btrfs fi df /mnt/ssd
> Data, single: total=113.11GiB, used=90.02GiB
> System, DUP: total=64.00MiB, used=24.00KiB
> System, single: total=4.00MiB, used=0.00
> Metadata, DUP: total=3.00GiB, used=2.46GiB
> 
> 
> I use snapper on two subvolumes of that BTRFS volume (/ and /home),
> each keeping 7 daily snapshots and up to 10 hourlies.
> 
> When I saw those errors I started deleting most of the older snapshots,
> and the issue went away instantly, but that can't be a solution, nor
> even a workaround.
> 
> I do have a "usual suspect" on that BTRFS volume, though: a KVM disk
> image of a Win8 VM (I _need_ Adobe Lightroom).
> 
> » lsattr /mnt/ssd/kvm-images/
> ---------------C /mnt/ssd/kvm-images/Windows_8_Pro.qcow2
> 
> So the image has CoW disabled. Now comes the interesting part:
> I'm trying to copy the image off to my raid5 array (BTRFS on top of an
> mdraid 5 - absolutely no issues with that one), but the cp process
> seems to be stalled.
> 
> After one hour the size of the destination copy is still 0 bytes. iotop
> almost constantly shows values like
> 
>  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO    COMMAND
>  4636 be/4 tom        14.40 K/s    0.00 B/s  0.00 %  0.71 % cp
> /mnt/ssd/kvm-images/Windows_8_Pro.qcow2 .
> 
> It tries to read the file at about 14K/s and writes absolutely nothing.
> 
> Any idea what's going wrong here, or suggestions on how to get that qcow
> file copied off? I do have a backup, but honestly it is quite aged -
> so simply rm'ing the image would be the very last thing I'd try.
> 
> Regards,
> Tom
> 
> PS: please reply-to-all, I'm not subscribed. Thanks.
> 


-- 
Thomas Kuther
Aindorferstr. 109
80689 München

Tel: 089-45249951
Mobil: 0160-8224418

tom@kuther.net


Thread overview: 3+ messages
2014-01-12 20:24 Issues with "no space left on device" maybe related to 3.13 and/or kvm disk image fragmentation Thomas Kuther
2014-01-12 23:05 ` Thomas Kuther [this message]
2014-01-13  7:21   ` Duncan
