public inbox for linux-kernel@vger.kernel.org
From: Pablo Neira Ayuso <pablo@netfilter.org>
To: Rongguang Wei <clementwei90@163.com>
Cc: netfilter-devel@vger.kernel.org, kadlec@netfilter.org,
	coreteam@netfilter.org, linux-kernel@vger.kernel.org,
	Rongguang Wei <weirongguang@kylinos.cn>
Subject: Re: [PATCH v1] netfilter: x_tables: fix ordering of get and update table private
Date: Wed, 16 Oct 2024 16:31:59 +0200	[thread overview]
Message-ID: <Zw_OXzBgfFULaEzs@calendula> (raw)
In-Reply-To: <20241016030909.64932-1-clementwei90@163.com>

On Wed, Oct 16, 2024 at 11:09:09AM +0800, Rongguang Wei wrote:
> From: Rongguang Wei <weirongguang@kylinos.cn>
> 
> We hit a kernel panic in ipt_do_table:
> PANIC: "Unable to handle kernel paging request at virtual address 00706f746b736564"

This patch is not correct.

> and the stack is:
>      PC: ffff5e1dbecf0750  [ipt_do_table+1432]
>      LR: ffff5e1dbecf04e4  [ipt_do_table+812]
>      SP: ffff8021f7643370  PSTATE: 20400009
>     X29: ffff8021f7643390  X28: ffff802900c3990c  X27: ffffa0405245a000
>     X26: ffff80400ad645a8  X25: ffffa0201c4d8000  X24: ffff5e1dbed00228
>     X23: ffff80400ad64738  X22: 0000000000000000  X21: ffff80400ad64000
>     X20: ffff802114980ae8  X19: ffff8021f7643570  X18: 00000007ea9ec175
>     X17: 0000fffde7b52460  X16: ffff5e1e181e8f20  X15: 0000fffd9a0ae078
>     X14: 610d273b56961dbc  X13: 0a08010100007ecb  X12: f5011880fd874f59
>     X11: ffff5e1dbed10600  X10: ffffa0405245a000   X9: 569b063f004015d5
>      X8: ffff80400ad64738   X7: 0000000000010002   X6: 0000000000000000
>      X5: 0000000000000000   X4: 0000000000000000   X3: 0000000000000000
>      X2: 0000000000000000   X1: 2e706f746b736564   X0: ffff80400ad65850
> [ffff8021f7643390] ipt_do_table at ffff5e1dbecf074c [ip_tables]
> [ffff8021f76434d0] iptable_filter_hook at ffff5e1dbfe700a4 [iptable_filter]
> [ffff8021f76434f0] nf_hook_slow at ffff5e1e18c31c2c
> [ffff8021f7643530] ip_forward at ffff5e1e18c41924
> [ffff8021f76435a0] ip_rcv_finish at ffff5e1e18c3fddc
> [ffff8021f76435d0] ip_rcv at ffff5e1e18c40214
> [ffff8021f7643630] __netif_receive_skb_one_core at ffff5e1e18bbbed4
> [ffff8021f7643670] __netif_receive_skb at ffff5e1e18bbbf3c
> [ffff8021f7643690] process_backlog at ffff5e1e18bbd52c
> [ffff8021f76436f0] __napi_poll at ffff5e1e18bbc464
> [ffff8021f7643730] net_rx_action at ffff5e1e18bbc9a8
> 
> The panic happened in the ipt_do_table function:
> 
> 	private = READ_ONCE(table->private);
> 	jumpstack  = (struct ipt_entry **)private->jumpstack[cpu];
> 	[...]
> 	jumpstack[stackid++] = e;	// panic here
> 
> In the vmcore, the cpu is 4, and private->jumpstack[cpu] reads as
> 007365325f6b6365, an address that lies between the user and kernel
> address ranges, which caused the kernel panic. The kmem command also
> shows that the private->jumpstack address has been freed, so this
> looks like a use-after-free address.
> 
> But in xt_replace_table function:
> 
> 	private = table->private;
> 	[...]
> 	smp_wmb();
> 	table->private = newtable_info;
> 	smp_mb();
> 
> It seems there is no chance of getting a freed private member in
> ipt_do_table. There may be an ordering error, which looks impossible:
> 
> 	smp_wmb();
> 	table->private = newtable_info;
> 	private = table->private;
> 	smp_mb();

Makes no sense to me.

> we would read table->private after setting the new table->private.
> After that, the old private is freed in xt_free_table_info while still
> being used in ipt_do_table. Use READ_ONCE here to ensure we read
> private before setting the new one.

You had better enable CONFIG_KASAN and similar instrumentation there to
check what is really going on.
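
A possible starting point (option names as in recent kernels; exact
availability depends on kernel version and architecture, so treat this as a
sketch rather than a definitive recipe):

```shell
# From the kernel source tree: turn on KASAN and related debugging,
# then resolve any newly exposed dependencies.
./scripts/config --enable KASAN \
                 --enable KASAN_GENERIC \
                 --enable SLUB_DEBUG \
                 --enable DEBUG_KMEMLEAK
make olddefconfig
```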

> Signed-off-by: Rongguang Wei <weirongguang@kylinos.cn>
> ---
>  net/netfilter/x_tables.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
> index da5d929c7c85..1ce7a4f268d6 100644
> --- a/net/netfilter/x_tables.c
> +++ b/net/netfilter/x_tables.c
> @@ -1399,7 +1399,7 @@ xt_replace_table(struct xt_table *table,
>  
>  	/* Do the substitution. */
>  	local_bh_disable();
> -	private = table->private;
> +	private = READ_ONCE(table->private);
>  
>  	/* Check inside lock: is the old number correct? */
>  	if (num_counters != private->number) {
> -- 
> 2.25.1
> 
> 


Thread overview: 3+ messages
2024-10-16  3:09 [PATCH v1] netfilter: x_tables: fix ordering of get and update table private Rongguang Wei
2024-10-16 14:31 ` Pablo Neira Ayuso [this message]
2024-10-16 16:34   ` [netfilter-core] " Florian Westphal
