On 2015-10-29 09:03, cheater00 . wrote:
> Hi Liu,
> after talking with Holger I believe turning off COW on this FS will
> work to alleviate this issue. However, even with COW on, btrfs
> shouldn't be making my computer freeze every 5 seconds... especially
> while the disk is written to at mere tens of kilobytes per second.
> It's not even the disk holding the system. I consider this a pretty
> bad bug... should we go on with trying to reproduce a minimum case?
> How would I go about this?
Well, COW can cause some pretty unexpected behavior for certain use
cases. Because every rewrite goes to a new location on disk instead of
overwriting the old block in place, heavily rewritten files fragment
badly, and on a big disk (I think I remember you saying yours is larger
than 1TB) that fragmentation translates into significant seek times.
With the current state of BTRFS, I personally wouldn't run it with COW
turned on on anything bigger than 256G that has a non-zero seek time,
because large rewrites can cause horrifically long seeks just for a
read-modify-write cycle on a single block. This is also part of why
database files and virtual-machine images tend to be pathological use
cases for BTRFS. (I've put a rough sketch of the commands for turning
off COW at the end of this message.)

I do agree that this kind of thing is a bug, but it doesn't cause data
corruption, which means it ends up slightly lower priority as far as
most people are concerned. Reproducing it might also be tricky, because
I'd be willing to bet things get better to the point of being almost
unnoticeable with an internal disk (USB is horrible when it comes to
block storage performance, and has all kinds of potential reliability
issues).

Normally, when I try to reproduce something like this, I use a virtual
machine running the most recent stable version of the Linux kernel,
usually with a minimalistic Gentoo installation (although a clean
install of pretty much any distro works fine). There are a couple of
reasons I use such a setup:
1. A clean install provides a well-defined initial state, making it
   easier for other people to reproduce any results.
2. The most recent stable kernel (usually) rules out hitting bugs that
   have already been fixed.
3. A VM's disk access is slower, which visibly accentuates any
   performance issues.
4. A VM also makes it very easy to safely generate crash dumps and
   simulate data corruption for testing purposes, and makes it easier
   to experiment with different parameters (for example, UP versus SMP,
   or different amounts of RAM).

If you do decide to go this route, my suggestion would be to use
VirtualBox unless you have significant experience with some other
hypervisor, as it's one of the easiest to learn (I usually use Xen or
QEMU, but both take significant effort to set up initially and are
decidedly non-trivial to learn), and learning to debug stuff like this
is hard enough on its own. I've also sketched a bare-bones QEMU
invocation at the end of this message in case you end up going that
way anyway.
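
For reference, turning off COW looks roughly like the following; the
mount point and file names are just placeholders for wherever the
filesystem in question lives, and keep in mind that nodatacow also
disables checksumming and compression for the affected data:

    # Disable COW for new files created in a directory (the attribute
    # only applies to files created after it is set, so existing files
    # have to be copied in again to pick it up):
    mkdir /mnt/data/nocow                  # /mnt/data is a placeholder
    chattr +C /mnt/data/nocow
    lsattr -d /mnt/data/nocow              # should now show the 'C' flag

    # Or mount the whole filesystem with COW disabled:
    mount -o nodatacow /dev/sdX /mnt/data  # /dev/sdX is a placeholder

    # filefrag shows how badly a heavily rewritten file has fragmented:
    filefrag -v /mnt/data/some-database-file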
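
And in case QEMU ends up being the path of least resistance after all,
a minimal invocation isn't that bad; the image names and sizes below
are just placeholders:

    # Scratch disk image to hold the btrfs filesystem under test:
    qemu-img create -f raw btrfs-test.img 20G

    # Boot an existing guest installation with 2G of RAM and one CPU;
    # change -m / -smp to compare different RAM sizes or UP vs SMP:
    qemu-system-x86_64 -enable-kvm -m 2048 -smp 1 \
        -drive file=guest.img,format=raw \
        -drive file=btrfs-test.img,format=raw

    # Inside the guest, the second drive shows up as its own disk
    # (e.g. /dev/sdb), which you can format with mkfs.btrfs and then
    # hammer on to try to reproduce the stalls.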