On Sun, 2013-12-15 at 03:35 +0100, Hans-Kristian Bakke wrote:
> I have done some more testing. I turned off everything using the disk
> and only did defrag. I have created a script that gives me a list of
> the files with the most extents. I started from the top to improve the
> fragmentation of the worst files. The most fragmented file was a file
> of about 32 GB with over 250 000 extents!
> It seems that I can defrag two or three largish (15-30 GB) files with
> ~100 000 extents just fine, but after a while the system locks up (not
> a complete hard lock, but everything hangs and a restart is necessary
> to get a fully working system again).
>
> It seems like defrag operations are triggering the issue, probably in
> combination with the large and heavily fragmented files.

I'm trying to understand how defrag factors into your backup workload.
Do you have autodefrag on, or are you running a defrag as part of the
backup when you see these stalls? If not, we're seeing a different
problem.

-chris
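
For anyone wanting to reproduce the per-file extent listing described
above, here is a minimal sketch of such a script. It assumes filefrag
(from e2fsprogs) is installed; the mount point, size cutoff, and the
top-20 limit are placeholder examples, not values from the original
report:

  #!/bin/sh
  # List large files under a btrfs mount sorted by extent count,
  # worst-fragmented first. filefrag prints "<file>: <N> extents found",
  # so we split on ": " and sort numerically on the extent count.
  # Note: file names containing ": " would confuse this simple parser.
  find /mnt/btrfs -xdev -type f -size +1G -print0 |
      xargs -0 filefrag 2>/dev/null |
      awk -F': ' '{ n = $2 + 0; print n, $1 }' |
      sort -rn |
      head -20

Individual files from that list can then be defragmented one at a time
with "btrfs filesystem defragment <file>", which matches the
file-by-file approach described in the quoted message.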