From: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
To: Wang Xiaoguang <wangxg.fnst@cn.fujitsu.com>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: BTRFS: space_info 4 has 18446742286429913088 free, is not full
Date: Wed, 28 Sep 2016 14:25:33 +0200	[thread overview]
Message-ID: <8dfc4782-7f11-64d1-1e86-7aeebbfc4f17@profihost.ag> (raw)
In-Reply-To: <57EBB329.9060009@cn.fujitsu.com>

On 28.09.2016 at 14:10, Wang Xiaoguang wrote:
> hello,
> 
> On 09/28/2016 08:02 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Xiaoguang Wang,
>>
>>> On 28.09.2016 at 13:35, Wang Xiaoguang wrote:
>>> hello,
>>>
>>> On 09/28/2016 07:15 PM, Stefan Priebe - Profihost AG wrote:
>>>> Dear list,
>>>>
>>>> Is there any chance anybody wants to work with me on the following
>>>> issue?
>>> Though I'm also somewhat new to btrfs, I'd like to.
>>>
>>>> BTRFS: space_info 4 has 18446742286429913088 free, is not full
>>>> BTRFS: space_info total=98247376896, used=77036814336, pinned=0,
>>>> reserved=0, may_use=1808490201088, readonly=0
>>>>
>>>> I get this nearly every day.
>>>>
>>>> Here are some messages collected today and yesterday from different
>>>> servers:
>>>> | BTRFS: space_info 4 has 18446742182612910080 free, is not full |
>>>> | BTRFS: space_info 4 has 18446742254739439616 free, is not full |
>>>> | BTRFS: space_info 4 has 18446743980225085440 free, is not full |
>>>> | BTRFS: space_info 4 has 18446743619906420736 free, is not full |
>>>> | BTRFS: space_info 4 has 18446743647369576448 free, is not full |
>>>> | BTRFS: space_info 4 has 18446742286429913088 free, is not full |
>>>>
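
A side note on those numbers: all of the huge "free" values above sit just
below 2^64, which suggests the free-space accounting went negative and is
being printed as an unsigned 64-bit value. For the first dump,
total(98247376896) - used(77036814336) - may_use(1808490201088)
= -1787279638528, and that, printed as a u64, is exactly
18446742286429913088. A small standalone check (illustrative only, not
kernel code; the formula is just my reading of what the space_info dump
prints):

/* Illustrative only: reinterpret the reported "free" value as a signed
 * 64-bit number to make the underflow visible. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t total   = 98247376896ULL;     /* values from the dump above */
	uint64_t used    = 77036814336ULL;
	uint64_t may_use = 1808490201088ULL;

	uint64_t free_bytes = total - used - may_use;   /* wraps below zero */

	printf("printed as u64: %" PRIu64 "\n", free_bytes);           /* 18446742286429913088 */
	printf("as signed:      %" PRId64 "\n", (int64_t)free_bytes);  /* -1787279638528 */
	return 0;
}
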
>>>> What I tried so far without success:
>>>> - use vanilla 4.8-rc8 kernel
>>>> - use latest vanilla 4.4 kernel
>>>> - use latest 4.4 kernel + patches from Holger Hoffstaette
>>>> - use clear_cache,space_cache=v2
>>>> - use clear_cache,space_cache=v1
>>>>
>>>> But all attempts result in ENOSPC after a short period of time while
>>>> doing backups.
>>> I also hit ENOSPC errors in 4.8-rc6 when running tests that create and
>>> delete big files; for my cases, I have written some patches to fix it.
>>> Would you please apply my patches and give them a try:
>>> btrfs: try to satisfy metadata requests when every flush_space() returns
>>> btrfs: try to write enough delalloc bytes when reclaiming metadata space
>>> btrfs: make shrink_delalloc() try harder to reclaim metadata space
>>> You can find them on the btrfs mailing list.
>> Those are already in the patchset from Holger:
>>
>> So I have these in my testing patchset (latest 4.4 kernel + patches from
>> Holger Hoffstaette):
>>
>> btrfs-20160921-try-to-satisfy-metadata-requests-when-every-flush_space()-returns.patch
>>
>> btrfs-20160921-try-to-write-enough-delalloc-bytes-when-reclaiming-metadata-space.patch
>>
>> btrfs-20160922-make-shrink_delalloc()-try-harder-to-reclaim-metadata-space.patch
>>
> OK, I see.
> But given that you often run into ENOSPC errors, can you work out a
> reproducer script based on your workload? That would help us greatly.

I already tried that, but it didn't work. It seems I need a test device
with 20+ TB, and I need to create files that big in the tests. But that
isn't easy; currently I have no test hardware that big. Maybe I should
try it on a production server.
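
Just to sketch what I mean, something like the loop below, creating and
deleting large files over and over (TEST_DIR, FILE_SIZE and NUM_FILES are
placeholder values; the real backup workload writes single files in the
multi-TB range, which is what I can't reproduce on small test hardware):

/* Rough reproducer sketch, not a tested script: create a batch of large
 * files, fsync them, delete them again, and repeat, roughly mimicking a
 * backup rotation. All sizes and paths are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define TEST_DIR  "/mnt/btrfs-test"   /* placeholder mount point */
#define FILE_SIZE (4ULL << 30)        /* 4 GiB per file, scaled down */
#define NUM_FILES 8
#define BUF_SIZE  (1 << 20)           /* 1 MiB write buffer */

int main(void)
{
	char *buf = malloc(BUF_SIZE);
	char path[256];

	if (!buf)
		return 1;
	memset(buf, 0xab, BUF_SIZE);

	for (int round = 0; round < 100; round++) {
		/* create and fill a batch of large files */
		for (int i = 0; i < NUM_FILES; i++) {
			snprintf(path, sizeof(path), "%s/big-%d", TEST_DIR, i);
			int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
			if (fd < 0) {
				perror("open");
				return 1;
			}
			for (unsigned long long off = 0; off < FILE_SIZE; off += BUF_SIZE) {
				if (write(fd, buf, BUF_SIZE) != BUF_SIZE) {
					perror("write");   /* this is where ENOSPC shows up */
					close(fd);
					return 1;
				}
			}
			fsync(fd);
			close(fd);
		}
		/* delete the batch again, like rotating old backups */
		for (int i = 0; i < NUM_FILES; i++) {
			snprintf(path, sizeof(path), "%s/big-%d", TEST_DIR, i);
			unlink(path);
		}
		sync();
	}
	free(buf);
	return 0;
}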

Stefan

> Regards,
> Xiaoguang Wang
> 
>>
>> Greets,
>> Stefan
>>
>>> Regards,
>>> Xiaoguang Wang
>>>> Greets,
>>>> Stefan
>>>
>>>
>>
> 
> 
> 

Thread overview: 37+ messages
2016-09-28 11:15 BTRFS: space_info 4 has 18446742286429913088 free, is not full Stefan Priebe - Profihost AG
2016-09-28 11:35 ` Wang Xiaoguang
2016-09-28 12:02   ` Stefan Priebe - Profihost AG
2016-09-28 12:10     ` Wang Xiaoguang
2016-09-28 12:25       ` Stefan Priebe - Profihost AG [this message]
2016-09-29  6:49       ` Stefan Priebe - Profihost AG
2016-09-29  6:55         ` Wang Xiaoguang
2016-09-29  7:09           ` Stefan Priebe - Profihost AG
2016-09-29  7:13             ` Wang Xiaoguang
2016-09-29  7:27               ` Stefan Priebe - Profihost AG
2016-09-29 10:03                 ` Adam Borowski
2016-09-29 10:05                   ` Stefan Priebe - Profihost AG
2016-10-06  3:04                 ` Wang Xiaoguang
2016-10-06  7:32                   ` Stefan Priebe - Profihost AG
2016-10-06  7:35                   ` Stefan Priebe - Profihost AG
2016-10-07  7:03                   ` Stefan Priebe - Profihost AG
2016-10-07  7:17                     ` Wang Xiaoguang
2016-10-07  7:47                       ` Paul Jones
2016-10-07  7:48                         ` Paul Jones
2016-10-07  7:59                       ` Stefan Priebe - Profihost AG
2016-10-07  8:05                       ` Stefan Priebe - Profihost AG
2016-10-07  8:06                       ` Stefan Priebe - Profihost AG
2016-10-07  8:07                         ` Wang Xiaoguang
2016-10-07  8:16                           ` Stefan Priebe - Profihost AG
2016-10-07  8:19                             ` Wang Xiaoguang
2016-10-07  9:33                       ` Holger Hoffstätte
2016-10-08  5:56                         ` Stefan Priebe - Profihost AG
2016-10-08 20:49                         ` Stefan Priebe - Profihost AG
2016-10-08  6:05                   ` Stefan Priebe - Profihost AG
2016-10-10 20:06                   ` Stefan Priebe - Profihost AG
2016-10-11  3:16                     ` Wang Xiaoguang
2016-10-23 17:47                 ` Stefan Priebe - Profihost AG
2016-10-25 10:48                   ` Wang Xiaoguang
2016-09-28 12:47   ` Holger Hoffstätte
2016-09-28 13:06     ` Stefan Priebe - Profihost AG
2016-09-28 13:44       ` Holger Hoffstätte
2016-09-28 13:59         ` Stefan Priebe - Profihost AG
