linux-btrfs.vger.kernel.org archive mirror
From: Miao Xie <miaox@cn.fujitsu.com>
To: Linux Btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: [PATCH V3 1/2] Btrfs: cleanup duplicated division functions
Date: Sun, 23 Sep 2012 17:54:18 +0800	[thread overview]
Message-ID: <505EDC4A.1060709@cn.fujitsu.com> (raw)
In-Reply-To: <20120921152444.GV17430@twin.jikos.cz>

On Fri, 21 Sep 2012 17:24:44 +0200, David Sterba wrote:
> On Fri, Sep 21, 2012 at 05:07:46PM +0800, Miao Xie wrote:
>> --- a/fs/btrfs/ioctl.c
>> +++ b/fs/btrfs/ioctl.c
>> @@ -3335,6 +3335,24 @@ static long btrfs_ioctl_balance(struct file *file, void __user *arg)
>>  
>>  			goto do_balance;
>>  		}
>> +
>> +		if ((bargs->data.flags & BTRFS_BALANCE_ARGS_USAGE) &&
>> +		    (bargs->data.usage < 0 || bargs->data.usage > 100)) {
> 
> the 0 checks belong here
> 
>> +			ret = -EINVAL;
>> +			goto out_bargs;
>> +		}
>> +
>> +		if ((bargs->meta.flags & BTRFS_BALANCE_ARGS_USAGE) &&
>> +		    (bargs->meta.usage < 0 || bargs->meta.usage > 100)) {
>> +			ret = -EINVAL;
>> +			goto out_bargs;
>> +		}
>> +
>> +		if ((bargs->sys.flags & BTRFS_BALANCE_ARGS_USAGE) &&
>> +		    (bargs->sys.usage < 0 || bargs->sys.usage > 100)) {
>> +			ret = -EINVAL;
>> +			goto out_bargs;
>> +		}
>>  	} else {
>>  		bargs = NULL;
>>  	}
>> @@ -2347,7 +2335,8 @@ static int chunk_usage_filter(struct btrfs_fs_info *fs_info, u64 chunk_offset,
>>  	cache = btrfs_lookup_block_group(fs_info, chunk_offset);
>>  	chunk_used = btrfs_block_group_used(&cache->item);
>>  
>> -	user_thresh = div_factor_fine(cache->key.offset, bargs->usage);
>> +	BUG_ON(bargs->usage < 0 || bargs->usage > 100);
> 
> otherwise it reliably crashes here

Sorry, I don't see why it would crash here if we pass in 0. I tried passing 0,
and it worked fine.

I think the only case we must handle is that a user might have set an invalid value (>100 or <0)
on an old kernel, and that value could be stored in the filesystem. If that filesystem is then
mounted on a new kernel, problems may occur.

Thanks
Miao


Thread overview: 24+ messages
2012-09-13 10:51 [PATCH 1/2] Btrfs: cleanup duplicated division functions Miao Xie
2012-09-14 13:54 ` David Sterba
2012-09-17  2:21   ` Miao Xie
2012-09-17 12:07     ` Ilya Dryomov
2012-09-17 16:31     ` David Sterba
2012-09-18  3:53       ` Miao Xie
2012-09-20  2:57 ` [PATCH V2 " Miao Xie
2012-09-20 13:28   ` David Sterba
2012-09-21  8:26     ` Miao Xie
2012-09-21  9:07   ` [PATCH V3 " Miao Xie
2012-09-21 15:24     ` David Sterba
2012-09-23  9:54       ` Miao Xie [this message]
2012-09-24 16:33         ` David Sterba
2012-09-23 11:49     ` Ilya Dryomov
2012-09-24  2:05       ` Miao Xie
2012-09-24 16:47         ` David Sterba
2012-09-24 18:42           ` Ilya Dryomov
2012-09-25 10:24             ` Miao Xie
2012-09-27 10:15           ` Miao Xie
2012-09-27 10:19     ` [PATCH V4 " Miao Xie
2012-09-27 16:56       ` Ilya Dryomov
2012-09-28  1:30         ` Miao Xie
2012-09-28  1:49       ` [PATCH V5 " Miao Xie
2012-09-28 10:09         ` David Sterba
