* [PATCH v2 1/2] btrfs: account for pinned bytes in should_alloc_chunk
From: jeffm @ 2017-06-22 13:51 UTC
To: linux-btrfs; +Cc: Jeff Mahoney

From: Jeff Mahoney <jeffm@suse.com>

In a heavy write scenario, we can end up with a large number of pinned
bytes. This can translate into (very) premature ENOSPC because pinned
bytes must be accounted for when allowing a reservation but aren't
accounted for when deciding whether to create a new chunk.

This patch adds the accounting to should_alloc_chunk so that we can
create the chunk.

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
---
 fs/btrfs/extent-tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 33d979e9ea2a..88b04742beea 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -4377,7 +4377,7 @@ static int should_alloc_chunk(struct btrfs_fs_info *fs_info,
 {
 	struct btrfs_block_rsv *global_rsv = &fs_info->global_block_rsv;
 	u64 num_bytes = sinfo->total_bytes - sinfo->bytes_readonly;
-	u64 num_allocated = sinfo->bytes_used + sinfo->bytes_reserved;
+	u64 num_allocated = sinfo->bytes_used + sinfo->bytes_reserved + sinfo->bytes_pinned;
 	u64 thresh;
 
 	if (force == CHUNK_ALLOC_FORCE)
-- 
2.11.0
* [PATCH v2 2/2] btrfs: Simplify math in should_alloc_chunk
From: jeffm @ 2017-06-22 13:51 UTC
To: linux-btrfs; +Cc: Nikolay Borisov, Jeff Mahoney

From: Nikolay Borisov <nborisov@suse.com>

Currently should_alloc_chunk uses ->total_bytes - ->bytes_readonly to
signify the total number of bytes in this space info. However, given
Jeff's patch, which adds bytes_pinned to the calculation of
num_allocated, it becomes a lot clearer to eliminate num_bytes
altogether and fold bytes_readonly into the amount of used space. That
way we don't change the results of the subsequent comparisons. In the
process, also start using btrfs_space_info_used.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
---
 fs/btrfs/extent-tree.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 88b04742beea..b8293062ac8e 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -4376,8 +4376,7 @@ static int should_alloc_chunk(struct btrfs_fs_info *fs_info,
 			      struct btrfs_space_info *sinfo, int force)
 {
 	struct btrfs_block_rsv *global_rsv = &fs_info->global_block_rsv;
-	u64 num_bytes = sinfo->total_bytes - sinfo->bytes_readonly;
-	u64 num_allocated = sinfo->bytes_used + sinfo->bytes_reserved + sinfo->bytes_pinned;
+	u64 bytes_used = btrfs_space_info_used(sinfo, false);
 	u64 thresh;
 
 	if (force == CHUNK_ALLOC_FORCE)
@@ -4389,7 +4388,7 @@ static int should_alloc_chunk(struct btrfs_fs_info *fs_info,
 	 * global_rsv, it doesn't change except when the transaction commits.
 	 */
 	if (sinfo->flags & BTRFS_BLOCK_GROUP_METADATA)
-		num_allocated += calc_global_rsv_need_space(global_rsv);
+		bytes_used += calc_global_rsv_need_space(global_rsv);
 
 	/*
 	 * in limited mode, we want to have some free space up to
@@ -4399,11 +4398,11 @@
 		thresh = btrfs_super_total_bytes(fs_info->super_copy);
 		thresh = max_t(u64, SZ_64M, div_factor_fine(thresh, 1));
 
-		if (num_bytes - num_allocated < thresh)
+		if (sinfo->total_bytes - bytes_used < thresh)
 			return 1;
 	}
 
-	if (num_allocated + SZ_2M < div_factor(num_bytes, 8))
+	if (bytes_used + SZ_2M < div_factor(sinfo->total_bytes, 8))
 		return 0;
 	return 1;
 }
-- 
2.11.0
* Re: [PATCH v2 1/2] btrfs: account for pinned bytes in should_alloc_chunk
From: Omar Sandoval @ 2017-06-29 19:21 UTC
To: jeffm; +Cc: linux-btrfs

On Thu, Jun 22, 2017 at 09:51:47AM -0400, jeffm@suse.com wrote:
> From: Jeff Mahoney <jeffm@suse.com>
>
> In a heavy write scenario, we can end up with a large number of pinned
> bytes. This can translate into (very) premature ENOSPC because pinned
> bytes must be accounted for when allowing a reservation but aren't
> accounted for when deciding whether to create a new chunk.
>
> This patch adds the accounting to should_alloc_chunk so that we can
> create the chunk.

Hey, Jeff,

Does this fix your ENOSPC problem on a fresh filesystem? I just tracked
down an ENOSPC issue someone here reported when doing a btrfs send to a
fresh filesystem, and it sounds a lot like your issue: metadata
bytes_may_use shoots up but we don't allocate any chunks for it. I'm not
seeing how including bytes_pinned will help for this case. We won't have
any pinned bytes when populating a new fs, right?

I don't have a good solution. Allocating chunks based on bytes_may_use
is going to way over-allocate because of our worst-case estimations. I'm
double-checking now that the flusher is doing the right thing and not
missing anything. I'll keep digging, just wanted to know if you had any
thoughts.

> Signed-off-by: Jeff Mahoney <jeffm@suse.com>
> [patch quoted in full above]
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: [PATCH v2 1/2] btrfs: account for pinned bytes in should_alloc_chunk
From: Jeff Mahoney @ 2017-06-29 19:49 UTC
To: Omar Sandoval; +Cc: linux-btrfs, Nikolay Borisov

On 6/29/17 3:21 PM, Omar Sandoval wrote:
> On Thu, Jun 22, 2017 at 09:51:47AM -0400, jeffm@suse.com wrote:
>> [...]
>> This patch adds the accounting to should_alloc_chunk so that we can
>> create the chunk.
>
> Hey, Jeff,

Hi Omar -

> Does this fix your ENOSPC problem on a fresh filesystem? I just tracked

No, it didn't. It helped somewhat, but we were still hitting it
frequently. What did help was reverting "Btrfs: skip commit transaction
if we don't have enough pinned bytes" (not upstream yet, on the list).

> down an ENOSPC issue someone here reported when doing a btrfs send to a
> fresh filesystem and it sounds a lot like your issue: metadata
> bytes_may_use shoots up but we don't allocate any chunks for it. I'm not
> seeing how including bytes_pinned will help for this case. We won't have
> any pinned bytes when populating a new fs, right?

Our test environment is just installing the OS. That means lots of
creates, writes, and then renames, so there's a fair amount of metadata
churn that results in elevated pinned_bytes. Rsync can cause the same
workload pretty easily too. Nikolay was going to look into coming up
with a configuration for fsstress that would emulate it.

> I don't have a good solution. Allocating chunks based on bytes_may_use
> is going to way over-allocate because of our worst-case estimations. I'm
> double-checking now that the flusher is doing the right thing and not
> missing anything. I'll keep digging, just wanted to know if you had any
> thoughts.

My suspicion is that it all just happens to work and that there are
several bugs working together that approximate a correct result. My
reasoning is that the patch I referenced above is correct: the logic in
may_commit_transaction is inverted and is causing a ton of additional
transaction commits. I think those additional commits free pinned bytes
more quickly, so things mostly just work and pinned bytes don't play as
much of a role. But once the transaction count comes down, the pinned
bytes count gets elevated and becomes more important. It should be taken
into account when determining whether committing a transaction early
will release enough space to honor the reservation without allocating a
new chunk. If the answer is yes, flush it. If not, there's no point in
flushing now, so just allocate the chunk and move on.

The big question is where this 80% number comes into play.

There is a caveat here: almost all of our testing has been on 4.4 with a
bunch of these patches backported. I believe the same issue will still
be there on mainline, but we're in release crunch mode and I haven't had
a chance to test more fully.

-Jeff
* Re: [PATCH v2 1/2] btrfs: account for pinned bytes in should_alloc_chunk
From: Omar Sandoval @ 2017-06-29 20:01 UTC
To: Jeff Mahoney; +Cc: linux-btrfs, Nikolay Borisov

On Thu, Jun 29, 2017 at 03:49:05PM -0400, Jeff Mahoney wrote:
> No, it didn't. It helped somewhat, but we were still hitting it
> frequently. What did help was reverting "Btrfs: skip commit transaction
> if we don't have enough pinned bytes" (not upstream yet, on the list).

I see, that makes sense.

> Our test environment is just installing the OS. That means lots of
> creates, writes, and then renames, so there's a fair amount of metadata
> churn that results in elevated pinned_bytes. Rsync can cause the same
> workload pretty easily too.

The reproducer I have is a ~1.7GB btrfs receive onto a brand new 3GB
filesystem. In my case, nothing (or very little) was getting pinned, but
it makes sense that it's different for your case.

> My suspicion is that it all just happens to work and that there are
> several bugs working together that approximate a correct result.

It certainly feels that way :)

> [...] There is a caveat here: almost all of our testing has been on 4.4
> with a bunch of these patches backported. I believe the same issue will
> still be there on mainline, but we're in release crunch mode and I
> haven't had a chance to test more fully.

What's weird is that my reproducer hits this very frequently (>50% of
the time) on our internal kernel build, which is 4.6 + backports up to
4.12, but upstream 4.12-rc7 hits it much less frequently (~5% of the
time).

Anyways, this is all getting messy in my head, so I'm just going to go
head down on this for a little while and see what I can come up with.
Thanks for the reply!
* Re: [PATCH v2 1/2] btrfs: account for pinned bytes in should_alloc_chunk
From: Nikolay Borisov @ 2017-06-29 20:25 UTC
To: Jeff Mahoney, Omar Sandoval; +Cc: linux-btrfs

On 29.06.2017 22:49, Jeff Mahoney wrote:
> Our test environment is just installing the OS. That means lots of
> creates, writes, and then renames, so there's a fair amount of metadata
> churn that results in elevated pinned_bytes. Rsync can cause the same
> workload pretty easily too. Nikolay was going to look into coming up
> with a configuration for fsstress that would emulate it.

I did experiment with fsstress -f rename=65 -f write=35, but this thing
just exhausted the filesystem completely, with no premature ENOSPC. I
also tried doing just renames on a filesystem that had around 1GB of
free space; again, usage steadily increased but no ENOSPC was observed.
* Re: [PATCH v2 1/2] btrfs: account for pinned bytes in should_alloc_chunk
From: Omar Sandoval @ 2017-06-29 22:09 UTC
To: Jeff Mahoney; +Cc: linux-btrfs, Nikolay Borisov

On Thu, Jun 29, 2017 at 03:49:05PM -0400, Jeff Mahoney wrote:
> [...]
> My suspicion is that it all just happens to work and that there are
> several bugs working together that approximate a correct result.
> [...]
> There is a caveat here: almost all of our testing has been on 4.4 with a
> bunch of these patches backported. I believe the same issue will still
> be there on mainline, but we're in release crunch mode and I haven't had
> a chance to test more fully.
>
> -Jeff

Jeff, can you try this and see if it helps?

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 33d979e9ea2a..83eecd33ad96 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -4776,10 +4776,6 @@ static void shrink_delalloc(struct btrfs_root *root, u64 to_reclaim, u64 orig,
 		else
 			flush = BTRFS_RESERVE_NO_FLUSH;
 		spin_lock(&space_info->lock);
-		if (can_overcommit(root, space_info, orig, flush)) {
-			spin_unlock(&space_info->lock);
-			break;
-		}
 		if (list_empty(&space_info->tickets) &&
 		    list_empty(&space_info->priority_tickets)) {
 			spin_unlock(&space_info->lock);

In my test case, it looks like what's happening is that most of the
metadata reservation we have comes from delalloc extents. When someone
comes along who isn't allowed to overcommit anymore, they queue up their
ticket and kick the flusher. Then the flusher comes along, flushes a
little bit of delalloc, and sees "oh, we can overcommit now, we're
good", but it still hasn't freed enough to actually fulfill the ticket,
so the waiter still gets an ENOSPC.

This fixes it for my reproducer, but I need to put together a smaller
test case.
* Re: [PATCH v2 1/2] btrfs: account for pinned bytes in should_alloc_chunk
From: David Sterba @ 2017-07-10 17:23 UTC
To: jeffm; +Cc: linux-btrfs

On Thu, Jun 22, 2017 at 09:51:47AM -0400, jeffm@suse.com wrote:
> From: Jeff Mahoney <jeffm@suse.com>
> [...]
> This patch adds the accounting to should_alloc_chunk so that we can
> create the chunk.
>
> Signed-off-by: Jeff Mahoney <jeffm@suse.com>

I'm adding the two patches to for-next. More reviews welcome.