* [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support @ 2010-02-05 1:43 Florian Westphal 2010-02-05 1:43 ` [PATCH 1/7] netfilter: ebtables: abort if next_offset is too small Florian Westphal ` (7 more replies) 0 siblings, 8 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 1:43 UTC (permalink / raw) To: netfilter-devel Hello netfilter team, the following patch series adds CONFIG_COMPAT support to ebtables (unfortunately tested on x86_64 only). The first patches are minor bug fixes/preparation patches; patch 5 is the main CONFIG_COMPAT hunk, and patches 6 and 7 add CONFIG_COMPAT support to the targets/matches that need special handling (maybe I missed some, though). include/linux/netfilter/x_tables.h | 2 +- net/bridge/netfilter/ebt_802_3.c | 2 +- net/bridge/netfilter/ebt_arp.c | 2 +- net/bridge/netfilter/ebt_arpreply.c | 2 +- net/bridge/netfilter/ebt_dnat.c | 2 +- net/bridge/netfilter/ebt_ip.c | 2 +- net/bridge/netfilter/ebt_ip6.c | 2 +- net/bridge/netfilter/ebt_limit.c | 18 +- net/bridge/netfilter/ebt_log.c | 2 +- net/bridge/netfilter/ebt_mark.c | 33 +- net/bridge/netfilter/ebt_mark_m.c | 39 ++- net/bridge/netfilter/ebt_nflog.c | 2 +- net/bridge/netfilter/ebt_pkttype.c | 2 +- net/bridge/netfilter/ebt_redirect.c | 2 +- net/bridge/netfilter/ebt_snat.c | 2 +- net/bridge/netfilter/ebt_stp.c | 2 +- net/bridge/netfilter/ebt_ulog.c | 2 +- net/bridge/netfilter/ebt_vlan.c | 2 +- net/bridge/netfilter/ebtables.c | 1064 ++++++++++++++++++++++++++++++++--- net/netfilter/x_tables.c | 6 +- Florian Westphal (7): netfilter: ebtables: abort if next_offset is too small netfilter: ebtables: avoid explicit XT_ALIGN() in match/targets netfilter: CONFIG_COMPAT: allow delta to exceed 32767 netfilter: ebtables: split do_replace into two functions netfilter: ebtables: add CONFIG_COMPAT support netfilter: ebt_limit: add CONFIG_COMPAT support netfilter: ebtables: mark: add CONFIG_COMPAT support ^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 1/7] netfilter: ebtables: abort if next_offset is too small 2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal @ 2010-02-05 1:43 ` Florian Westphal 2010-02-05 1:43 ` [PATCH 2/7] netfilter: ebtables: avoid explicit XT_ALIGN() in match/targets Florian Westphal ` (6 subsequent siblings) 7 siblings, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 1:43 UTC (permalink / raw) To: netfilter-devel next_offset must be > 0, otherwise this loops forever. The offset also contains the size of the ebt_entry structure itself, so anything smaller is invalid. Signed-off-by: Florian Westphal <fwestphal@astaro.com> --- net/bridge/netfilter/ebtables.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c index 12beb58..1939041 100644 --- a/net/bridge/netfilter/ebtables.c +++ b/net/bridge/netfilter/ebtables.c @@ -442,6 +442,8 @@ static int ebt_verify_pointers(struct ebt_replace *repl, break; if (left < e->next_offset) break; + if (e->next_offset < sizeof(struct ebt_entry)) + return -EINVAL; offset += e->next_offset; } } -- 1.6.4.4 ^ permalink raw reply related [flat|nested] 21+ messages in thread
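To see why the check matters, here is a minimal stand-alone sketch of the kind of walk ebtables performs over the entry blob; walk_entries() and struct ebt_entry_min are made-up stand-ins for illustration, not the kernel code. Without the new test, an entry advertising a next_offset of 0 (or anything smaller than its own header) keeps the cursor from ever reaching the end of the blob:

/* Hedged sketch: stand-ins for the real structures and the real walk. */
#include <errno.h>

struct ebt_entry_min {             /* stand-in for struct ebt_entry */
        unsigned int bitmask;
        unsigned int next_offset;  /* distance to next entry, includes this header */
};

static int walk_entries(const char *base, unsigned int total)
{
        unsigned int offset = 0;

        while (offset < total) {
                const struct ebt_entry_min *e =
                        (const struct ebt_entry_min *)(base + offset);

                /* the check added by this patch: next_offset always covers
                 * at least the entry header itself, so smaller values are
                 * bogus and would make this loop spin forever */
                if (e->next_offset < sizeof(struct ebt_entry_min))
                        return -EINVAL;

                offset += e->next_offset;
        }
        return 0;
}

int main(void)
{
        /* a single bogus entry claiming next_offset == 0 */
        struct ebt_entry_min bogus = { .bitmask = 0, .next_offset = 0 };

        return walk_entries((const char *)&bogus, sizeof(bogus)) == -EINVAL ? 0 : 1;
}

The real check in ebt_verify_pointers() compares against sizeof(struct ebt_entry), which also implicitly rejects next_offset == 0.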
* [PATCH 2/7] netfilter: ebtables: avoid explicit XT_ALIGN() in match/targets 2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal 2010-02-05 1:43 ` [PATCH 1/7] netfilter: ebtables: abort if next_offset is too small Florian Westphal @ 2010-02-05 1:43 ` Florian Westphal 2010-02-05 1:43 ` [PATCH 3/7] netfilter: CONFIG_COMPAT: allow delta to exceed 32767 Florian Westphal ` (5 subsequent siblings) 7 siblings, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 1:43 UTC (permalink / raw) To: netfilter-devel From: Florian Westphal <fwestphal@astaro.com> This will cause trouble once CONFIG_COMPAT support is added to ebtables. xt_compat_*_offset() calculate the kernel/userland structure size delta using: XT_ALIGN(size) - COMPAT_XT_ALIGN(size) If the match/target sizes are aligned at registration time, delta is always zero. Should have zero effect for existing systems: xtables uses XT_ALIGN() whenever it deals with match/target sizes. Signed-off-by: Florian Westphal <fwestphal@astaro.com> --- net/bridge/netfilter/ebt_802_3.c | 2 +- net/bridge/netfilter/ebt_arp.c | 2 +- net/bridge/netfilter/ebt_arpreply.c | 2 +- net/bridge/netfilter/ebt_dnat.c | 2 +- net/bridge/netfilter/ebt_ip.c | 2 +- net/bridge/netfilter/ebt_ip6.c | 2 +- net/bridge/netfilter/ebt_limit.c | 2 +- net/bridge/netfilter/ebt_log.c | 2 +- net/bridge/netfilter/ebt_mark.c | 2 +- net/bridge/netfilter/ebt_mark_m.c | 2 +- net/bridge/netfilter/ebt_nflog.c | 2 +- net/bridge/netfilter/ebt_pkttype.c | 2 +- net/bridge/netfilter/ebt_redirect.c | 2 +- net/bridge/netfilter/ebt_snat.c | 2 +- net/bridge/netfilter/ebt_stp.c | 2 +- net/bridge/netfilter/ebt_ulog.c | 2 +- net/bridge/netfilter/ebt_vlan.c | 2 +- 17 files changed, 17 insertions(+), 17 deletions(-) diff --git a/net/bridge/netfilter/ebt_802_3.c b/net/bridge/netfilter/ebt_802_3.c index bd91dc5..5d11767 100644 --- a/net/bridge/netfilter/ebt_802_3.c +++ b/net/bridge/netfilter/ebt_802_3.c @@ -52,7 +52,7 @@ static struct xt_match ebt_802_3_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_802_3_mt, .checkentry = ebt_802_3_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_802_3_info)), + .matchsize = sizeof(struct ebt_802_3_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_arp.c b/net/bridge/netfilter/ebt_arp.c index b7ad604..e727697 100644 --- a/net/bridge/netfilter/ebt_arp.c +++ b/net/bridge/netfilter/ebt_arp.c @@ -120,7 +120,7 @@ static struct xt_match ebt_arp_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_arp_mt, .checkentry = ebt_arp_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_arp_info)), + .matchsize = sizeof(struct ebt_arp_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_arpreply.c b/net/bridge/netfilter/ebt_arpreply.c index 76584cd..f392e9d 100644 --- a/net/bridge/netfilter/ebt_arpreply.c +++ b/net/bridge/netfilter/ebt_arpreply.c @@ -78,7 +78,7 @@ static struct xt_target ebt_arpreply_tg_reg __read_mostly = { .hooks = (1 << NF_BR_NUMHOOKS) | (1 << NF_BR_PRE_ROUTING), .target = ebt_arpreply_tg, .checkentry = ebt_arpreply_tg_check, - .targetsize = XT_ALIGN(sizeof(struct ebt_arpreply_info)), + .targetsize = sizeof(struct ebt_arpreply_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_dnat.c b/net/bridge/netfilter/ebt_dnat.c index 6b49ea9..2bb40d7 100644 --- a/net/bridge/netfilter/ebt_dnat.c +++ b/net/bridge/netfilter/ebt_dnat.c @@ -54,7 +54,7 @@ static struct xt_target ebt_dnat_tg_reg __read_mostly = { (1 << NF_BR_LOCAL_OUT) | (1 << NF_BR_BROUTING), 
.target = ebt_dnat_tg, .checkentry = ebt_dnat_tg_check, - .targetsize = XT_ALIGN(sizeof(struct ebt_nat_info)), + .targetsize = sizeof(struct ebt_nat_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_ip.c b/net/bridge/netfilter/ebt_ip.c index d771bbf..5de6df6 100644 --- a/net/bridge/netfilter/ebt_ip.c +++ b/net/bridge/netfilter/ebt_ip.c @@ -110,7 +110,7 @@ static struct xt_match ebt_ip_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_ip_mt, .checkentry = ebt_ip_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_ip_info)), + .matchsize = sizeof(struct ebt_ip_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_ip6.c b/net/bridge/netfilter/ebt_ip6.c index 784a657..bbf2534 100644 --- a/net/bridge/netfilter/ebt_ip6.c +++ b/net/bridge/netfilter/ebt_ip6.c @@ -122,7 +122,7 @@ static struct xt_match ebt_ip6_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_ip6_mt, .checkentry = ebt_ip6_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_ip6_info)), + .matchsize = sizeof(struct ebt_ip6_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_limit.c b/net/bridge/netfilter/ebt_limit.c index f7bd919..9dd16e6 100644 --- a/net/bridge/netfilter/ebt_limit.c +++ b/net/bridge/netfilter/ebt_limit.c @@ -90,7 +90,7 @@ static struct xt_match ebt_limit_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_limit_mt, .checkentry = ebt_limit_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_limit_info)), + .matchsize = sizeof(struct ebt_limit_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_log.c b/net/bridge/netfilter/ebt_log.c index e4ea3fd..e873924 100644 --- a/net/bridge/netfilter/ebt_log.c +++ b/net/bridge/netfilter/ebt_log.c @@ -195,7 +195,7 @@ static struct xt_target ebt_log_tg_reg __read_mostly = { .family = NFPROTO_BRIDGE, .target = ebt_log_tg, .checkentry = ebt_log_tg_check, - .targetsize = XT_ALIGN(sizeof(struct ebt_log_info)), + .targetsize = sizeof(struct ebt_log_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_mark.c b/net/bridge/netfilter/ebt_mark.c index 2fee7e8..153e167 100644 --- a/net/bridge/netfilter/ebt_mark.c +++ b/net/bridge/netfilter/ebt_mark.c @@ -59,7 +59,7 @@ static struct xt_target ebt_mark_tg_reg __read_mostly = { .family = NFPROTO_BRIDGE, .target = ebt_mark_tg, .checkentry = ebt_mark_tg_check, - .targetsize = XT_ALIGN(sizeof(struct ebt_mark_t_info)), + .targetsize = sizeof(struct ebt_mark_t_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_mark_m.c b/net/bridge/netfilter/ebt_mark_m.c index ea570f2..89abf40 100644 --- a/net/bridge/netfilter/ebt_mark_m.c +++ b/net/bridge/netfilter/ebt_mark_m.c @@ -41,7 +41,7 @@ static struct xt_match ebt_mark_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_mark_mt, .checkentry = ebt_mark_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_mark_m_info)), + .matchsize = sizeof(struct ebt_mark_m_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_nflog.c b/net/bridge/netfilter/ebt_nflog.c index 2a63d99..40dbd24 100644 --- a/net/bridge/netfilter/ebt_nflog.c +++ b/net/bridge/netfilter/ebt_nflog.c @@ -51,7 +51,7 @@ static struct xt_target ebt_nflog_tg_reg __read_mostly = { .family = NFPROTO_BRIDGE, .target = ebt_nflog_tg, .checkentry = ebt_nflog_tg_check, - .targetsize = XT_ALIGN(sizeof(struct ebt_nflog_info)), + .targetsize = sizeof(struct ebt_nflog_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_pkttype.c b/net/bridge/netfilter/ebt_pkttype.c index 883e96e..e2a07e6 100644 --- 
a/net/bridge/netfilter/ebt_pkttype.c +++ b/net/bridge/netfilter/ebt_pkttype.c @@ -36,7 +36,7 @@ static struct xt_match ebt_pkttype_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_pkttype_mt, .checkentry = ebt_pkttype_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_pkttype_info)), + .matchsize = sizeof(struct ebt_pkttype_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_redirect.c b/net/bridge/netfilter/ebt_redirect.c index c8a49f7..9be8fbc 100644 --- a/net/bridge/netfilter/ebt_redirect.c +++ b/net/bridge/netfilter/ebt_redirect.c @@ -59,7 +59,7 @@ static struct xt_target ebt_redirect_tg_reg __read_mostly = { (1 << NF_BR_BROUTING), .target = ebt_redirect_tg, .checkentry = ebt_redirect_tg_check, - .targetsize = XT_ALIGN(sizeof(struct ebt_redirect_info)), + .targetsize = sizeof(struct ebt_redirect_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_snat.c b/net/bridge/netfilter/ebt_snat.c index 8d04d4c..9c7b520 100644 --- a/net/bridge/netfilter/ebt_snat.c +++ b/net/bridge/netfilter/ebt_snat.c @@ -67,7 +67,7 @@ static struct xt_target ebt_snat_tg_reg __read_mostly = { .hooks = (1 << NF_BR_NUMHOOKS) | (1 << NF_BR_POST_ROUTING), .target = ebt_snat_tg, .checkentry = ebt_snat_tg_check, - .targetsize = XT_ALIGN(sizeof(struct ebt_nat_info)), + .targetsize = sizeof(struct ebt_nat_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_stp.c b/net/bridge/netfilter/ebt_stp.c index 75e29a9..92a93d3 100644 --- a/net/bridge/netfilter/ebt_stp.c +++ b/net/bridge/netfilter/ebt_stp.c @@ -177,7 +177,7 @@ static struct xt_match ebt_stp_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_stp_mt, .checkentry = ebt_stp_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_stp_info)), + .matchsize = sizeof(struct ebt_stp_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_ulog.c b/net/bridge/netfilter/ebt_ulog.c index ce50688..c6ac657 100644 --- a/net/bridge/netfilter/ebt_ulog.c +++ b/net/bridge/netfilter/ebt_ulog.c @@ -275,7 +275,7 @@ static struct xt_target ebt_ulog_tg_reg __read_mostly = { .family = NFPROTO_BRIDGE, .target = ebt_ulog_tg, .checkentry = ebt_ulog_tg_check, - .targetsize = XT_ALIGN(sizeof(struct ebt_ulog_info)), + .targetsize = sizeof(struct ebt_ulog_info), .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_vlan.c b/net/bridge/netfilter/ebt_vlan.c index 3dddd48..be1dd2e 100644 --- a/net/bridge/netfilter/ebt_vlan.c +++ b/net/bridge/netfilter/ebt_vlan.c @@ -163,7 +163,7 @@ static struct xt_match ebt_vlan_mt_reg __read_mostly = { .family = NFPROTO_BRIDGE, .match = ebt_vlan_mt, .checkentry = ebt_vlan_mt_check, - .matchsize = XT_ALIGN(sizeof(struct ebt_vlan_info)), + .matchsize = sizeof(struct ebt_vlan_info), .me = THIS_MODULE, }; -- 1.6.4.4 ^ permalink raw reply related [flat|nested] 21+ messages in thread
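The effect described in the patch description can be seen with a tiny user-space sketch; ALIGN_TO, XT_ALIGN_SKETCH and COMPAT_XT_ALIGN_SKETCH below are simplified stand-ins that assume 8-byte native and 4-byte compat alignment (x86_64 vs. ia32), not the kernel macros:

/* Hedged sketch of why pre-aligned matchsize values break the compat
 * delta computation. */
#include <stdio.h>

#define ALIGN_TO(x, a)            (((x) + (a) - 1) & ~((a) - 1))
#define XT_ALIGN_SKETCH(x)        ALIGN_TO((x), 8u)
#define COMPAT_XT_ALIGN_SKETCH(x) ALIGN_TO((x), 4u)

/* roughly what xt_compat_match_offset()/xt_compat_target_offset() compute */
static unsigned int compat_offset_sketch(unsigned int size)
{
        return XT_ALIGN_SKETCH(size) - COMPAT_XT_ALIGN_SKETCH(size);
}

int main(void)
{
        unsigned int raw = 20;   /* e.g. a 20-byte match payload */

        printf("registered raw:     delta %u\n", compat_offset_sketch(raw));
        printf("registered aligned: delta %u\n",
               compat_offset_sketch(XT_ALIGN_SKETCH(raw)));
        /* prints 4 and 0: registering XT_ALIGN()ed sizes hides the real
         * 32/64 bit size difference, hence this patch drops the XT_ALIGN() */
        return 0;
}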
* [PATCH 3/7] netfilter: CONFIG_COMPAT: allow delta to exceed 32767 2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal 2010-02-05 1:43 ` [PATCH 1/7] netfilter: ebtables: abort if next_offset is too small Florian Westphal 2010-02-05 1:43 ` [PATCH 2/7] netfilter: ebtables: avoid explicit XT_ALIGN() in match/targets Florian Westphal @ 2010-02-05 1:43 ` Florian Westphal 2010-02-05 1:43 ` [PATCH 4/7] netfilter: ebtables: split do_replace into two functions Florian Westphal ` (4 subsequent siblings) 7 siblings, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 1:43 UTC (permalink / raw) To: netfilter-devel; +Cc: Florian Westphal With 32 bit userland and 64 bit kernels, it is unlikely but possible that insertion of new rules fails even though there are only about 2000 iptables rules. This happens because the compat delta is using a short int. Easily reproducible via "iptables -m limit"; after about 2050 rules inserting new ones fails with -ELOOP. Note that compat_delta included 2 bytes of padding on x86_64, so the structure size remains the same. Signed-off-by: Florian Westphal <fw@strlen.de> --- include/linux/netfilter/x_tables.h | 2 +- net/netfilter/x_tables.c | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h index 365fabe..2e342ad 100644 --- a/include/linux/netfilter/x_tables.h +++ b/include/linux/netfilter/x_tables.h @@ -583,7 +583,7 @@ extern void xt_compat_unlock(u_int8_t af); extern int xt_compat_add_offset(u_int8_t af, unsigned int offset, short delta); extern void xt_compat_flush_offsets(u_int8_t af); -extern short xt_compat_calc_jump(u_int8_t af, unsigned int offset); +extern int xt_compat_calc_jump(u_int8_t af, unsigned int offset); extern int xt_compat_match_offset(const struct xt_match *match); extern int xt_compat_match_from_user(struct xt_entry_match *m, diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c index f01955c..783ca37 100644 --- a/net/netfilter/x_tables.c +++ b/net/netfilter/x_tables.c @@ -37,7 +37,7 @@ MODULE_DESCRIPTION("{ip,ip6,arp,eb}_tables backend module"); struct compat_delta { struct compat_delta *next; unsigned int offset; - short delta; + int delta; }; struct xt_af { @@ -435,10 +435,10 @@ void xt_compat_flush_offsets(u_int8_t af) } EXPORT_SYMBOL_GPL(xt_compat_flush_offsets); -short xt_compat_calc_jump(u_int8_t af, unsigned int offset) +int xt_compat_calc_jump(u_int8_t af, unsigned int offset) { struct compat_delta *tmp; - short delta; + int delta; for (tmp = xt[af].compat_offsets, delta = 0; tmp; tmp = tmp->next) if (tmp->offset < offset) -- 1.6.4.4 ^ permalink raw reply related [flat|nested] 21+ messages in thread
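For illustration, a stand-alone sketch of the wrap-around; the per-rule delta of 16 bytes is an assumption chosen to roughly match the numbers above, not a value taken from the kernel:

/* Hedged sketch: accumulate a per-rule compat delta in a short vs. an int. */
#include <stdio.h>

int main(void)
{
        const int per_rule_delta = 16;  /* assumed rough delta per "-m limit" rule */
        short delta_short = 0;          /* old type of struct compat_delta.delta */
        int delta_int = 0;              /* type after this patch */

        for (int rules = 1; rules <= 2100; rules++) {
                delta_short += per_rule_delta;
                delta_int += per_rule_delta;
                if (delta_short != delta_int) {
                        printf("short delta wraps after %d rules: %d vs %d\n",
                               rules, delta_short, delta_int);
                        return 0;
                }
        }
        return 0;
}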
* [PATCH 4/7] netfilter: ebtables: split do_replace into two functions 2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal ` (2 preceding siblings ...) 2010-02-05 1:43 ` [PATCH 3/7] netfilter: CONFIG_COMPAT: allow delta to exceed 32767 Florian Westphal @ 2010-02-05 1:43 ` Florian Westphal 2010-02-05 1:43 ` [PATCH 5/7] netfilter: ebtables: add CONFIG_COMPAT support Florian Westphal ` (3 subsequent siblings) 7 siblings, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 1:43 UTC (permalink / raw) To: netfilter-devel; +Cc: Florian Westphal once CONFIG_COMPAT support is merged this allows to call do_replace_finish() after doing the CONFIG_COMPAT conversion instead of copy & pasting this. Signed-off-by: Florian Westphal <fw@strlen.de> --- net/bridge/netfilter/ebtables.c | 133 ++++++++++++++++++++------------------- 1 files changed, 69 insertions(+), 64 deletions(-) diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c index 1939041..a658de2 100644 --- a/net/bridge/netfilter/ebtables.c +++ b/net/bridge/netfilter/ebtables.c @@ -958,90 +958,44 @@ static void get_counters(struct ebt_counter *oldcounters, } } -/* replace the table */ -static int do_replace(struct net *net, void __user *user, unsigned int len) +static int do_replace_finish(struct net *net, struct ebt_replace *repl, + struct ebt_table_info *newinfo) { - int ret, i, countersize; - struct ebt_table_info *newinfo; - struct ebt_replace tmp; - struct ebt_table *t; + int ret, i; struct ebt_counter *counterstmp = NULL; /* used to be able to unlock earlier */ struct ebt_table_info *table; - - if (copy_from_user(&tmp, user, sizeof(tmp)) != 0) - return -EFAULT; - - if (len != sizeof(tmp) + tmp.entries_size) { - BUGPRINT("Wrong len argument\n"); - return -EINVAL; - } - - if (tmp.entries_size == 0) { - BUGPRINT("Entries_size never zero\n"); - return -EINVAL; - } - /* overflow check */ - if (tmp.nentries >= ((INT_MAX - sizeof(struct ebt_table_info)) / NR_CPUS - - SMP_CACHE_BYTES) / sizeof(struct ebt_counter)) - return -ENOMEM; - if (tmp.num_counters >= INT_MAX / sizeof(struct ebt_counter)) - return -ENOMEM; - - countersize = COUNTER_OFFSET(tmp.nentries) * nr_cpu_ids; - newinfo = vmalloc(sizeof(*newinfo) + countersize); - if (!newinfo) - return -ENOMEM; - - if (countersize) - memset(newinfo->counters, 0, countersize); - - newinfo->entries = vmalloc(tmp.entries_size); - if (!newinfo->entries) { - ret = -ENOMEM; - goto free_newinfo; - } - if (copy_from_user( - newinfo->entries, tmp.entries, tmp.entries_size) != 0) { - BUGPRINT("Couldn't copy entries from userspace\n"); - ret = -EFAULT; - goto free_entries; - } + struct ebt_table *t; /* the user wants counters back the check on the size is done later, when we have the lock */ - if (tmp.num_counters) { - counterstmp = vmalloc(tmp.num_counters * sizeof(*counterstmp)); - if (!counterstmp) { - ret = -ENOMEM; - goto free_entries; - } + if (repl->num_counters) { + counterstmp = vmalloc(repl->num_counters * sizeof(*counterstmp)); + if (!counterstmp) + return -ENOMEM; } - else - counterstmp = NULL; - /* this can get initialized by translate_table() */ newinfo->chainstack = NULL; - ret = ebt_verify_pointers(&tmp, newinfo); + ret = ebt_verify_pointers(repl, newinfo); if (ret != 0) goto free_counterstmp; - ret = translate_table(net, tmp.name, newinfo); + ret = translate_table(net, repl->name, newinfo); if (ret != 0) goto free_counterstmp; - t = find_table_lock(net, tmp.name, &ret, &ebt_mutex); + t = find_table_lock(net, repl->name, 
&ret, &ebt_mutex); if (!t) { ret = -ENOENT; goto free_iterate; } /* the table doesn't like it */ - if (t->check && (ret = t->check(newinfo, tmp.valid_hooks))) + if (t->check && (ret = t->check(newinfo, repl->valid_hooks))) goto free_unlock; - if (tmp.num_counters && tmp.num_counters != t->private->nentries) { + if (repl->num_counters && repl->num_counters != t->private->nentries) { BUGPRINT("Wrong nr. of counters requested\n"); ret = -EINVAL; goto free_unlock; @@ -1057,7 +1011,7 @@ static int do_replace(struct net *net, void __user *user, unsigned int len) module_put(t->me); /* we need an atomic snapshot of the counters */ write_lock_bh(&t->lock); - if (tmp.num_counters) + if (repl->num_counters) get_counters(t->private->counters, counterstmp, t->private->nentries); @@ -1068,10 +1022,9 @@ static int do_replace(struct net *net, void __user *user, unsigned int len) allocation. Only reason why this is done is because this way the lock is held only once, while this doesn't bring the kernel into a dangerous state. */ - if (tmp.num_counters && - copy_to_user(tmp.counters, counterstmp, - tmp.num_counters * sizeof(struct ebt_counter))) { - BUGPRINT("Couldn't copy counters to userspace\n"); + if (repl->num_counters && + copy_to_user(repl->counters, counterstmp, + repl->num_counters * sizeof(struct ebt_counter))) { ret = -EFAULT; } else @@ -1105,6 +1058,58 @@ free_counterstmp: vfree(newinfo->chainstack[i]); vfree(newinfo->chainstack); } + return ret; +} + +/* replace the table */ +static int do_replace(struct net *net, void __user *user, unsigned int len) +{ + int ret, countersize; + struct ebt_table_info *newinfo; + struct ebt_replace tmp; + + if (copy_from_user(&tmp, user, sizeof(tmp)) != 0) + return -EFAULT; + + if (len != sizeof(tmp) + tmp.entries_size) { + BUGPRINT("Wrong len argument\n"); + return -EINVAL; + } + + if (tmp.entries_size == 0) { + BUGPRINT("Entries_size never zero\n"); + return -EINVAL; + } + /* overflow check */ + if (tmp.nentries >= ((INT_MAX - sizeof(struct ebt_table_info)) / NR_CPUS - + SMP_CACHE_BYTES) / sizeof(struct ebt_counter)) + return -ENOMEM; + if (tmp.num_counters >= INT_MAX / sizeof(struct ebt_counter)) + return -ENOMEM; + + countersize = COUNTER_OFFSET(tmp.nentries) * nr_cpu_ids; + newinfo = vmalloc(sizeof(*newinfo) + countersize); + if (!newinfo) + return -ENOMEM; + + if (countersize) + memset(newinfo->counters, 0, countersize); + + newinfo->entries = vmalloc(tmp.entries_size); + if (!newinfo->entries) { + ret = -ENOMEM; + goto free_newinfo; + } + if (copy_from_user( + newinfo->entries, tmp.entries, tmp.entries_size) != 0) { + BUGPRINT("Couldn't copy entries from userspace\n"); + ret = -EFAULT; + goto free_entries; + } + + ret = do_replace_finish(net, &tmp, newinfo); + if (ret == 0) + return ret; free_entries: vfree(newinfo->entries); free_newinfo: -- 1.6.4.4 ^ permalink raw reply related [flat|nested] 21+ messages in thread
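The resulting call structure, sketched as a stand-alone toy program (struct replace and the stub bodies are placeholders, only the shape of the split matters): do_replace() keeps the native copy_from_user path, while the compat path added in the next patch converts the 32-bit structures first and then funnels into the same do_replace_finish():

/* Hedged toy model of the intended call flow; real arguments, copying
 * and validation are elided. */
#include <stdio.h>

struct replace { int nentries; };          /* placeholder for struct ebt_replace */

static int do_replace_finish(struct replace *repl)
{
        /* verify pointers, translate the table, swap it in, return counters */
        printf("replacing table with %d entries\n", repl->nentries);
        return 0;
}

/* native path: userspace structures already have the kernel layout */
static int do_replace(void)
{
        struct replace tmp = { .nentries = 3 };
        return do_replace_finish(&tmp);
}

/* compat path (added in patch 5/7): convert the 32-bit layout first,
 * then run the exact same tail instead of a copy-pasted one */
static int compat_do_replace(void)
{
        struct replace tmp = { .nentries = 3 };
        return do_replace_finish(&tmp);
}

int main(void)
{
        return do_replace() || compat_do_replace();
}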
* [PATCH 5/7] netfilter: ebtables: add CONFIG_COMPAT support 2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal ` (3 preceding siblings ...) 2010-02-05 1:43 ` [PATCH 4/7] netfilter: ebtables: split do_replace into two functions Florian Westphal @ 2010-02-05 1:43 ` Florian Westphal 2010-02-05 13:52 ` Florian Westphal 2010-02-07 22:43 ` Florian Westphal 2010-02-05 1:43 ` [PATCH 6/7] netfilter: ebt_limit: " Florian Westphal ` (2 subsequent siblings) 7 siblings, 2 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 1:43 UTC (permalink / raw) To: netfilter-devel From: Florian Westphal <fwestphal@astaro.com> Main code for 32 bit userland ebtables binary with 64 bit kernels support. Tested on x86_64 kernel only, using 64bit ebtables binary for output comparision. At least ebt_mark, m_mark and ebt_limit need CONFIG_COMPAT hooks, too. remaining problem: The ebtables userland makefile has: ifeq ($(shell uname -m),sparc64) CFLAGS+=-DEBT_MIN_ALIGN=8 -DKERNEL_64_USERSPACE_32 endif struct ebt_replace, ebt_entry_match etc. then contain userland-side padding -- this means, if this is actually used in production -- that even if we are called from a 32 bit userland, the structures may already be in the right format. One possible solution would be to try to handle compat_* calls via the native handler and then fall back to the compat one if it fails, but it will cause kernel warnings like: kernel msg: ebtables bug: please report to author: Wrong size Thus this is not done here. I am not sure if this "userland-padding" is widely used; it looks incomplete (eg. I couldn't find any userland-padding for match/target structures; and some (eg. mark) do need special handling). Signed-off-by: Florian Westphal <fwestphal@astaro.com> --- net/bridge/netfilter/ebtables.c | 929 +++++++++++++++++++++++++++++++++++++++ 1 files changed, 929 insertions(+), 0 deletions(-) diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c index a658de2..1f2ca88 100644 --- a/net/bridge/netfilter/ebtables.c +++ b/net/bridge/netfilter/ebtables.c @@ -56,11 +56,39 @@ static DEFINE_MUTEX(ebt_mutex); +static int do_replace_finish(struct net *, struct ebt_replace *, + struct ebt_table_info *); +#ifdef CONFIG_COMPAT +static void ebt_standard_compat_from_user(void *dst, void *src) +{ + int v = *(compat_int_t *)src; + + if (v >= 0) + v += xt_compat_calc_jump(NFPROTO_BRIDGE, v); + memcpy(dst, &v, sizeof(v)); +} + +static int ebt_standard_compat_to_user(void __user *dst, void *src) +{ + compat_int_t cv = *(int *)src; + + if (cv >= 0) + cv -= xt_compat_calc_jump(NFPROTO_BRIDGE, cv); + return copy_to_user(dst, &cv, sizeof(cv)) ? -EFAULT : 0; +} +#endif + + static struct xt_target ebt_standard_target = { .name = "standard", .revision = 0, .family = NFPROTO_BRIDGE, .targetsize = sizeof(int), +#ifdef CONFIG_COMPAT + .compatsize = sizeof(compat_int_t), + .compat_from_user = ebt_standard_compat_from_user, + .compat_to_user = ebt_standard_compat_to_user, +#endif }; static inline int @@ -1429,10 +1457,905 @@ static int do_ebt_set_ctl(struct sock *sk, break; default: ret = -EINVAL; + } + return ret; +} + +#ifdef CONFIG_COMPAT +/* 32 bit-userspace compatibility definitions. 
*/ +struct compat_ebt_replace { + char name[EBT_TABLE_MAXNAMELEN]; + compat_uint_t valid_hooks; + compat_uint_t nentries; + compat_uint_t entries_size; + /* start of the chains */ + compat_uptr_t hook_entry[NF_BR_NUMHOOKS]; + /* nr of counters userspace expects back */ + compat_uint_t num_counters; + /* where the kernel will put the old counters. */ + compat_uptr_t counters; + compat_uptr_t entries; +}; + +/* struct ebt_entry_match, _target and _watcher have same layout */ +struct compat_ebt_entry_mwt { + union { + char name[EBT_FUNCTION_MAXNAMELEN]; + compat_uptr_t ptr; + } u; + compat_uint_t match_size; + compat_uint_t data[0]; +}; + +static int compat_table_info(const struct ebt_table_info *, struct compat_ebt_replace *); + +/* account for possible padding between match_size and ->data */ +static int ebt_compat_entry_padsize(void) +{ + BUILD_BUG_ON(XT_ALIGN(sizeof(struct ebt_entry_match)) < COMPAT_XT_ALIGN(sizeof(struct compat_ebt_entry_mwt))); + return (int) XT_ALIGN(sizeof(struct ebt_entry_match)) - COMPAT_XT_ALIGN(sizeof(struct compat_ebt_entry_mwt)); +} + +static int ebt_compat_match_offset(const struct xt_match *match, unsigned int userlen) +{ + /* + * ebt_among needs special handling. The kernel .matchsize is set to -1 at + * registration time; at runtime an EBT_ALIGN()ed value is expected. + * Example: userspace sends 4500, ebt_among.c wants 4504. + */ + if (unlikely(match->matchsize == -1)) + return XT_ALIGN(userlen) - COMPAT_XT_ALIGN(userlen); + return xt_compat_match_offset(match); +} + +static int compat_match_to_user(struct ebt_entry_match *m, void __user **dstptr, + unsigned int *size) +{ + const struct xt_match *match = m->u.match; + struct compat_ebt_entry_mwt __user *cm = *dstptr; + int off = ebt_compat_match_offset(match, m->match_size); + compat_uint_t msize = m->match_size - off; + + BUG_ON(off >= m->match_size); + + if (copy_to_user(cm->u.name, match->name, + strlen(match->name) + 1) || put_user(msize, &cm->match_size)) + return -EFAULT; + + if (match->compat_to_user) { + if (match->compat_to_user(cm->data, m->data)) + return -EFAULT; + } else if (copy_to_user(cm->data, m->data, msize)) + return -EFAULT; + + *size -= ebt_compat_entry_padsize() + off; + *dstptr = cm->data; + *dstptr += msize; + return 0; +} + +static int compat_target_to_user(struct ebt_entry_target *t, void __user **dstptr, + unsigned int *size) +{ + const struct xt_target *target = t->u.target; + struct compat_ebt_entry_mwt __user *cm = *dstptr; + int off = xt_compat_target_offset(target); + compat_uint_t tsize = t->target_size - off; + + BUG_ON(off >= t->target_size); + + if (copy_to_user(cm->u.name, target->name, + strlen(target->name) + 1) || put_user(tsize, &cm->match_size)) + return -EFAULT; + + if (target->compat_to_user) { + if (target->compat_to_user(cm->data, t->data)) + return -EFAULT; + } else if (copy_to_user(cm->data, t->data, tsize)) + return -EFAULT; + + *size -= ebt_compat_entry_padsize() + off; + *dstptr = cm->data; + *dstptr +=tsize; + return 0; +} + +static int compat_watcher_to_user(struct ebt_entry_watcher *w, void __user **dstptr, + unsigned int *size) +{ + return compat_target_to_user((struct ebt_entry_target *)w, dstptr, size); +} + +static int compat_copy_entry_to_user(struct ebt_entry *e, void __user **dstptr, + unsigned int *size, unsigned int *i) +{ + struct ebt_entry_target *t; + struct ebt_entry __user *ce; + u32 watchers_offset, target_offset, next_offset; + compat_uint_t origsize; + int ret; + + if (e->bitmask == 0) { + if (*size < sizeof(struct ebt_entries)) + return 
-EINVAL; + if (copy_to_user(*dstptr, e, sizeof(struct ebt_entries))) + return -EFAULT; + + *dstptr += sizeof(struct ebt_entries); + *size -= sizeof(struct ebt_entries); + return 0; + } + + if (*size < sizeof(*ce)) + return -EINVAL; + + ce = (struct ebt_entry __user *)*dstptr; + if (copy_to_user(ce, e, sizeof(*ce))) + return -EFAULT; + + origsize = *size; + *dstptr += sizeof(*ce); + + ret = EBT_MATCH_ITERATE(e, compat_match_to_user, dstptr, size); + if (ret) + return ret; + watchers_offset = e->watchers_offset - (origsize - *size); + + ret = EBT_WATCHER_ITERATE(e, compat_watcher_to_user, dstptr, size); + if (ret) + return ret; + target_offset = e->target_offset - (origsize - *size); + + t = (struct ebt_entry_target *) ((char *) e + e->target_offset); + + ret = compat_target_to_user(t, dstptr, size); + if (ret) + return ret; + next_offset = e->next_offset - (origsize - *size); + + if (put_user(watchers_offset, &ce->watchers_offset) || + put_user(target_offset, &ce->target_offset) || + put_user(next_offset, &ce->next_offset)) + return -EFAULT; + + *size -= sizeof(*ce); + (*i)++; + return 0; +} + +static int compat_copy_everything_to_user(struct ebt_table *t, void __user *user, + int *len, int cmd) +{ + struct compat_ebt_replace repl, tmp; + struct ebt_counter *counterstmp, *oldcounters; + struct ebt_table_info tinfo; + unsigned int size, i; + int ret; + char __user *buf; + void __user *pos; + + memset(&tinfo, 0, sizeof(tinfo)); + + if (cmd == EBT_SO_GET_ENTRIES) { + tinfo.entries_size = t->private->entries_size; + tinfo.nentries = t->private->nentries; + tinfo.entries = t->private->entries; + oldcounters = t->private->counters; + } else { + tinfo.entries_size = t->table->entries_size; + tinfo.nentries = t->table->nentries; + tinfo.entries = t->table->entries; + oldcounters = t->table->counters; + } + + if (copy_from_user(&tmp, user, sizeof(tmp))) + return -EFAULT; + + if (tmp.nentries != tinfo.nentries) { + BUGPRINT("Nentries wrong\n"); + return -EINVAL; + } + + buf = compat_ptr(tmp.entries); + size = tmp.entries_size; + + memcpy(&repl, &tmp, sizeof(repl)); + if (cmd == EBT_SO_GET_ENTRIES) + ret = compat_table_info(t->private, &repl); + else + ret = compat_table_info(&tinfo, &repl); + if (ret) + return ret; + + WARN_ON(tinfo.entries_size - xt_compat_calc_jump(NFPROTO_BRIDGE, tinfo.entries_size) != repl.entries_size); + + if (*len != sizeof(tmp) + repl.entries_size + + (tmp.num_counters? 
tinfo.nentries * sizeof(struct ebt_counter): 0)) { + pr_err("size mismatch: *len %d, entries_size %u, delta %d xe %d\n", + *len, tinfo.entries_size, xt_compat_calc_jump(NFPROTO_BRIDGE, tinfo.entries_size), repl.entries_size); + return -EINVAL; + } + /* userspace might not need the counters */ + if (tmp.num_counters) { + if (tmp.num_counters != tinfo.nentries) { + BUGPRINT("Num_counters wrong\n"); + return -EINVAL; + } + counterstmp = vmalloc(tinfo.nentries * sizeof(*counterstmp)); + if (!counterstmp) + return -ENOMEM; + + write_lock_bh(&t->lock); + get_counters(oldcounters, counterstmp, tinfo.nentries); + write_unlock_bh(&t->lock); + + buf = compat_ptr(tmp.counters); + if (copy_to_user(buf, counterstmp, tinfo.nentries * sizeof(*counterstmp))) { + vfree(counterstmp); + return -EFAULT; + } + vfree(counterstmp); + } + + buf = compat_ptr(tmp.entries); + pos = buf; + size = tmp.entries_size; + + return EBT_ENTRY_ITERATE(tinfo.entries, tinfo.entries_size, compat_copy_entry_to_user, + &pos, &size, &i); +} + +struct ebt_entries_buf_state { + char *buf_kern_start; /* kernel buffer to copy (translated) data to */ + u32 buf_kern_len; /* total size of kernel buffer */ + u32 buf_kern_offset; /* amount of data copied so far */ + u32 buf_user_offset; /* read position in userspace buffer */ + unsigned char **hook_entry; /* pointer to ebt_replace hook_entry */ +}; + +static int ebt_buf_count(struct ebt_entries_buf_state *state, unsigned int sz) +{ + state->buf_kern_offset += sz; + return state->buf_kern_offset >= sz ? 0 : -EINVAL; +} + +/* same as above, but use data from a kernel buffer instead of reading user data */ +static int ebt_buf_add(struct ebt_entries_buf_state *state, + void *data, unsigned int sz) +{ + BUG_ON(state->buf_user_offset > state->buf_kern_offset); + + if (state->buf_kern_start == NULL) + goto count_only; + + BUG_ON(state->buf_kern_offset > state->buf_kern_len); + BUG_ON(state->buf_kern_offset + sz > state->buf_kern_len); + + memcpy(state->buf_kern_start + state->buf_kern_offset, data, sz); + + count_only: + state->buf_user_offset += sz; + return ebt_buf_count(state, sz); +} + +static int ebt_buf_add_pad(struct ebt_entries_buf_state *state, unsigned int sz) +{ + char *b = state->buf_kern_start; + + BUG_ON(state->buf_user_offset > state->buf_kern_offset); + BUG_ON(b && state->buf_kern_offset > state->buf_kern_len); + + if (b != NULL) + memset(b + state->buf_kern_offset, 0, sz); + /* do not adjust ->buf_user_offset here, we added kernel-side padding */ + return ebt_buf_count(state, sz); +} + +enum compat_mwt { + EBT_COMPAT_MATCH, + EBT_COMPAT_WATCHER, + EBT_COMPAT_TARGET, +}; + +static int compat_mtw_from_user(struct compat_ebt_entry_mwt *mwt, enum compat_mwt compat_mwt, + struct ebt_entries_buf_state *state, const unsigned char *base) +{ + char name[EBT_FUNCTION_MAXNAMELEN]; + struct xt_match *match; + struct xt_target *wt; + void *dst = NULL; + int off, pad = 0, ret = 0; + unsigned int size_kern, entry_offset, match_size = mwt->match_size; + + strlcpy(name, mwt->u.name, sizeof(name)); + + if (state->buf_kern_start) + dst = state->buf_kern_start + state->buf_kern_offset; + + entry_offset = (unsigned char *) mwt - base; + switch (compat_mwt) { + case EBT_COMPAT_MATCH: + match = try_then_request_module(xt_find_match(NFPROTO_BRIDGE, name, 0), "ebt_%s", name); + if (match == NULL) + return -ENOENT; + if (IS_ERR(match)) + return PTR_ERR(match); + + off = ebt_compat_match_offset(match, match_size); + if (dst) { + if (match->compat_from_user) + match->compat_from_user(dst, mwt->data); + else 
+ memcpy(dst, mwt->data, match_size); + } + + size_kern = match->matchsize; + if (unlikely(size_kern == -1)) + size_kern = match_size; + module_put(match->me); + break; + case EBT_COMPAT_WATCHER: /* fallthrough */ + case EBT_COMPAT_TARGET: + wt = try_then_request_module(xt_find_target(NFPROTO_BRIDGE, name, 0), "ebt_%s", name); + if (wt == NULL) + return -ENOENT; + if (IS_ERR(wt)) + return PTR_ERR(wt); + off = xt_compat_target_offset(wt); + + if (dst) { + if (wt->compat_from_user) + wt->compat_from_user(dst, mwt->data); + else + memcpy(dst, mwt->data, match_size); + } + + size_kern = wt->targetsize; + module_put(wt->me); + break; + default: + BUG(); + } + + if (!dst) { + ret = xt_compat_add_offset(NFPROTO_BRIDGE, entry_offset, off + ebt_compat_entry_padsize()); + if (ret < 0) + return ret; + } + + state->buf_kern_offset += match_size + off; + state->buf_user_offset += match_size; + pad = XT_ALIGN(size_kern) - size_kern; + + if (pad > 0 && dst) { + BUG_ON(state->buf_kern_offset <= pad); + BUG_ON(state->buf_kern_offset - (match_size + off) + size_kern > state->buf_kern_len - pad); + memset(dst + size_kern, 0, pad); + } + BUG_ON(state->buf_kern_len && state->buf_kern_offset > state->buf_kern_len); + return off + match_size; +} + + +/* return size of all matches, watchers or target, including necessary alignment padding */ +static int ebt_size_mwt(struct compat_ebt_entry_mwt *match32, unsigned int size_user, enum compat_mwt type, + struct ebt_entries_buf_state *state, const void *base) +{ + int growth = 0; + unsigned int size_left = size_user; + char *buf; + + if (size_user == 0) + return 0; + + buf = (char *) match32; + + while (size_left >= sizeof(*match32)) { + struct ebt_entry_match *match_kern = (struct ebt_entry_match *) state->buf_kern_start; + int ret; + + if (match_kern) + match_kern = (struct ebt_entry_match *) (state->buf_kern_start + state->buf_kern_offset); + + ret = ebt_buf_add(state, buf, sizeof(*match32)); + if (ret < 0) + return ret; + size_left -= sizeof(*match32); + + /* match_size may be changed later, also add padding before -> data */ + ret = ebt_buf_add_pad(state, ebt_compat_entry_padsize()); + if (ret < 0) + return ret; + + if (match32->match_size > size_left) + return -EINVAL; + + size_left -= match32->match_size; + + ret = compat_mtw_from_user(match32, type, state, base); + if (ret < 0) + return ret; + + BUG_ON(ret < match32->match_size); + growth += ret - match32->match_size; + growth += ebt_compat_entry_padsize(); + + buf += sizeof(*match32); + buf += match32->match_size; + + if (match_kern) + match_kern->match_size = ret; + + pr_debug("%s: parsed entry, match_size was %u, size_user %u, size_left %u, type %d ret %d, growth %d\n", + __func__, match32->match_size, size_user, size_left, type, ret, growth); + WARN_ON(type == EBT_COMPAT_TARGET && size_left); + match32 = (struct compat_ebt_entry_mwt*) buf; + } + + return growth; +} + +#define EBT_COMPAT_WATCHER_ITERATE(e, fn, args...) \ +({ \ + unsigned int __i; \ + int __ret = 0; \ + struct compat_ebt_entry_mwt *__watcher; \ + \ + for (__i = e->watchers_offset; \ + __i < (e)->target_offset; \ + __i += __watcher->watcher_size + \ + sizeof(struct compat_ebt_entry_mwt)) { \ + __watcher = (void *)(e) + __i; \ + \ + __ret = fn(__watcher , ## args); \ + if (__ret != 0) \ + break; \ + } \ + if (__ret == 0) { \ + if (__i != (e)->target_offset) \ + __ret = -EINVAL; \ + } \ + __ret; \ +}) + +#define EBT_COMPAT_MATCH_ITERATE(e, fn, args...) 
\ +({ \ + unsigned int __i; \ + int __ret = 0; \ + struct compat_ebt_entry_mwt *__match; \ + \ + for (__i = sizeof(struct ebt_entry); \ + __i < (e)->watchers_offset; \ + __i += __match->match_size + \ + sizeof(struct compat_ebt_entry_mwt)) { \ + __match = (void *)(e) + __i; \ + \ + __ret = fn(__match , ## args); \ + if (__ret != 0) \ + break; \ + } \ + if (__ret == 0) { \ + if (__i != (e)->watchers_offset) \ + __ret = -EINVAL; \ + } \ + __ret; \ +}) + +/* called for all ebt_entry structures. */ +static int size_entry_match_watchers_target(struct ebt_entry *entry, const unsigned char *base, unsigned int *total, + struct ebt_entries_buf_state *state) +{ + unsigned int i, j, size_user, startoff, new_offset = 0; + unsigned int offsets[4]; /* match, watchers, targets and offset of the next struct ebt_entry */ + unsigned int *offsets_update = NULL; + int ret; + char *buf_start = (char *) entry; + + size_user = *total; + + if (size_user < sizeof(struct ebt_entries)) + return -EINVAL; + + if (!entry->bitmask) { + *total -= sizeof(struct ebt_entries); + return ebt_buf_add(state, entry, sizeof(struct ebt_entries)); + } + if (size_user < sizeof(*entry) || entry->next_offset < sizeof(*entry)) + return -EINVAL; + + startoff = state->buf_user_offset; + /* pull in the most part of ebt_entry, it does not need to be changed. */ + ret = ebt_buf_add(state, entry, offsetof(struct ebt_entry, watchers_offset)); + if (ret < 0) + return ret; + + offsets[0] = sizeof(struct ebt_entry); /* matches come first */ + memcpy(&offsets[1], &entry->watchers_offset, + sizeof(offsets) - sizeof(offsets[0])); + + if (state->buf_kern_start) + offsets_update = (unsigned int *) (state->buf_kern_start + state->buf_kern_offset); + ret = ebt_buf_add(state, &offsets[1], sizeof(offsets) - sizeof(offsets[0])); + if (ret < 0) + return ret; + /* + * 0: matches offset, always follows ebt_entry. + * 1: watchers offset, from ebt_entry structure + * 2: target offset, from ebt_entry structure + * 3: next ebt_entry offset, from ebt_entry structure + * + * offsets are relative to beginning of struct ebt_entry (i.e., 0). + */ + for (i = 0, j = 1 ; j < 4 ; j++, i++) { + char *buf = buf_start; + + pr_debug("offsets[%u] %u, offsets[%u], %u, size_user %u\n", + i, offsets[i], j, offsets[j], size_user); + + buf = buf_start + offsets[i]; + if (offsets[i] > offsets[j]) + return -EINVAL; + + ret = ebt_size_mwt((void *) buf, offsets[j] - offsets[i], i, state, base); + if (ret < 0) + return ret; + new_offset += ret; + if (offsets_update && new_offset) { + pr_debug("ebtables: change offset %d to %d\n", + offsets_update[i], offsets[j] + new_offset); + offsets_update[i] = offsets[j] + new_offset; + } + } + + startoff = state->buf_user_offset - startoff; + + BUG_ON(*total < startoff); + *total -= startoff; + return 0; +} + +/* + * repl->entries_size is the size of the ebt_entry blob in userspace. + * It might need more memory when copied to a 64 bit kernel in case + * userspace is 32-bit. So, first task: find out how much memory is needed. + * + * Called before validation is performed. 
+ */ +static int compat_copy_entries(unsigned char *data, unsigned int size_user, struct ebt_entries_buf_state *state) +{ + unsigned int size_remaining = size_user; + int ret; + + ret = EBT_ENTRY_ITERATE(data, size_user, size_entry_match_watchers_target, data, &size_remaining, state); + if (ret < 0) + return ret; + + WARN_ON(size_remaining); + WARN_ON(size_user + xt_compat_calc_jump(NFPROTO_BRIDGE, size_user) != state->buf_kern_offset); + return state->buf_kern_offset; +} + + +static int compat_copy_ebt_replace_from_user(struct ebt_replace *repl, void __user *user, unsigned int len) +{ + struct compat_ebt_replace tmp; + int i; + + if (len < sizeof(tmp)) + return -EINVAL; + + if (copy_from_user(&tmp, user, sizeof(tmp))) + return -EFAULT; + + if (len != sizeof(tmp) + tmp.entries_size) + return -EINVAL; + + if (tmp.entries_size == 0) + return -EINVAL; + + if (tmp.nentries >= ((INT_MAX - sizeof(struct ebt_table_info)) / NR_CPUS - + SMP_CACHE_BYTES) / sizeof(struct ebt_counter)) + return -ENOMEM; + if (tmp.num_counters >= INT_MAX / sizeof(struct ebt_counter)) + return -ENOMEM; + + memcpy(repl, &tmp, offsetof(struct ebt_replace, hook_entry)); + + /* starting with hook_entry, 32 vs. 64 bit structures are different */ + for (i = 0; i < NF_BR_NUMHOOKS; i++) + repl->hook_entry[i] = compat_ptr(tmp.hook_entry[i]); + + repl->num_counters = tmp.num_counters; + repl->counters = compat_ptr(tmp.counters); + repl->entries = compat_ptr(tmp.entries); + return 0; +} + +static int compat_do_replace(struct net *net, void __user *user, unsigned int len) +{ + int ret, i, countersize, size64; + struct ebt_table_info *newinfo; + struct ebt_replace tmp; + struct ebt_entries_buf_state state; + void *entries_tmp; + + if (compat_copy_ebt_replace_from_user(&tmp, user, len)) + return -EFAULT; + + countersize = COUNTER_OFFSET(tmp.nentries) * nr_cpu_ids; + newinfo = vmalloc(sizeof(*newinfo) + countersize); + if (!newinfo) + return -ENOMEM; + + if (countersize) + memset(newinfo->counters, 0, countersize); + + memset(&state, 0, sizeof(state)); + + newinfo->entries = vmalloc(tmp.entries_size); + if (!newinfo->entries) { + ret = -ENOMEM; + goto free_newinfo; + } + if (copy_from_user( + newinfo->entries, tmp.entries, tmp.entries_size) != 0) { + ret = -EFAULT; + goto free_entries; + } + + entries_tmp = newinfo->entries; + + xt_compat_lock(NFPROTO_BRIDGE); + + ret = compat_copy_entries(entries_tmp, tmp.entries_size, &state); + if (ret < 0) + goto out_unlock; + + pr_debug("tmp.entries_size %d, kern offset %d, user offset %d delta %d\n", + tmp.entries_size, state.buf_kern_offset, state.buf_user_offset, + xt_compat_calc_jump(NFPROTO_BRIDGE, tmp.entries_size)); + + size64 = ret; + newinfo->entries = vmalloc(size64); + if (!newinfo->entries) { + vfree(entries_tmp); + ret = -ENOMEM; + goto out_unlock; + } + + memset(&state, 0, sizeof(state)); + state.buf_kern_start = newinfo->entries; + state.buf_kern_len = size64; + + ret = compat_copy_entries(entries_tmp, tmp.entries_size, &state); + BUG_ON(ret < 0); /* parses same data again */ + + vfree(entries_tmp); + tmp.entries_size = size64; + + for (i = 0; i < NF_BR_NUMHOOKS; i++) { + char __user *usrptr; + if (tmp.hook_entry[i]) { + unsigned int delta = (char __user *) tmp.hook_entry[i] - tmp.entries; + pr_debug("hook_entry %d: (%p) delta %d\n", + i, tmp.hook_entry[i], xt_compat_calc_jump(NFPROTO_BRIDGE, delta)); + usrptr = (char __user *) tmp.hook_entry[i]; + usrptr += xt_compat_calc_jump(NFPROTO_BRIDGE, delta); + tmp.hook_entry[i] = (struct ebt_entries __user *) usrptr; + } + } + + 
xt_compat_flush_offsets(NFPROTO_BRIDGE); + xt_compat_unlock(NFPROTO_BRIDGE); + + ret = do_replace_finish(net, &tmp, newinfo); + if (ret == 0) + return ret; +free_entries: + vfree(newinfo->entries); +free_newinfo: + vfree(newinfo); + return ret; +out_unlock: + xt_compat_flush_offsets(NFPROTO_BRIDGE); + xt_compat_unlock(NFPROTO_BRIDGE); + goto free_entries; +} + +/* userspace just supplied us with counters */ +static int compat_update_counters(struct net *net, void __user *user, unsigned int len) +{ + int i, ret; + struct ebt_counter *tmp; + struct compat_ebt_replace hlp; + struct ebt_table *t; + + if (copy_from_user(&hlp, user, sizeof(hlp))) + return -EFAULT; + + if (len != sizeof(hlp) + hlp.num_counters * sizeof(struct ebt_counter)) + return -EINVAL; + if (hlp.num_counters == 0) + return -EINVAL; + + if (!(tmp = vmalloc(hlp.num_counters * sizeof(*tmp)))) + return -ENOMEM; + + t = find_table_lock(net, hlp.name, &ret, &ebt_mutex); + if (!t) + goto free_tmp; + + if (hlp.num_counters != t->private->nentries) { + ret = -EINVAL; + goto unlock_mutex; + } + + if ( copy_from_user(tmp, compat_ptr(hlp.counters), + hlp.num_counters * sizeof(struct ebt_counter)) ) { + ret = -EFAULT; + goto unlock_mutex; + } + + /* we want an atomic add of the counters */ + write_lock_bh(&t->lock); + + /* we add to the counters of the first cpu */ + for (i = 0; i < hlp.num_counters; i++) { + t->private->counters[i].pcnt += tmp[i].pcnt; + t->private->counters[i].bcnt += tmp[i].bcnt; + } + + write_unlock_bh(&t->lock); + ret = 0; +unlock_mutex: + mutex_unlock(&ebt_mutex); +free_tmp: + vfree(tmp); + return ret; +} + +static int compat_do_ebt_set_ctl(struct sock *sk, + int cmd, void __user *user, unsigned int len) +{ + int ret; + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + switch(cmd) { + case EBT_SO_SET_ENTRIES: + ret = compat_do_replace(sock_net(sk), user, len); + break; + case EBT_SO_SET_COUNTERS: + ret = compat_update_counters(sock_net(sk), user, len); + break; + default: + ret = -EINVAL; } return ret; } +static int compat_calc_match(struct ebt_entry_match *m, int *off) +{ + *off += ebt_compat_match_offset(m->u.match, m->match_size); + *off += ebt_compat_entry_padsize(); + return 0; +} + +static int compat_calc_watcher(struct ebt_entry_watcher *w, int *off) +{ + *off += xt_compat_target_offset(w->u.watcher); + *off += ebt_compat_entry_padsize(); + return 0; +} + +static int compat_calc_entry(const struct ebt_entry *e, + const struct ebt_table_info *info, + const void *base, struct compat_ebt_replace *newinfo) +{ + const struct ebt_entry_target *t; + unsigned int entry_offset; + int off, ret, i; + + if (e->bitmask == 0) + return 0; + + off = 0; + entry_offset = (void *)e - base; + + EBT_MATCH_ITERATE(e, compat_calc_match, &off); + EBT_WATCHER_ITERATE(e, compat_calc_watcher, &off); + + t = (const struct ebt_entry_target *) ((char *) e + e->target_offset); + + off += xt_compat_target_offset(t->u.target) + ebt_compat_entry_padsize(); + + newinfo->entries_size -= off; + + ret = xt_compat_add_offset(NFPROTO_BRIDGE, entry_offset, off); + if (ret) + return ret; + + for (i = 0; i < NF_BR_NUMHOOKS; i++) { + if (info->hook_entry[i] && + (e < (struct ebt_entry *)(base - (void *) info->hook_entry[i]))) { + newinfo->hook_entry[i] -= off; + pr_debug("0x%08X -> 0x%08X\n",newinfo->hook_entry[i] + off, newinfo->hook_entry[i]); + } + } + + return 0; +} + +static int compat_table_info(const struct ebt_table_info *info, struct compat_ebt_replace *newinfo) +{ + unsigned int size = info->entries_size; + const void *entries = 
info->entries; + + newinfo->entries_size = size; + + return EBT_ENTRY_ITERATE(entries, size, compat_calc_entry, info, entries, newinfo); +} + +static int compat_do_ebt_get_ctl(struct sock *sk, int cmd, void __user *user, int *len) +{ + int ret; + struct compat_ebt_replace tmp; + struct ebt_table *t; + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + if (copy_from_user(&tmp, user, sizeof(tmp))) + return -EFAULT; + + t = find_table_lock(sock_net(sk), tmp.name, &ret, &ebt_mutex); + if (!t) + return ret; + + xt_compat_lock(NFPROTO_BRIDGE); + switch(cmd) { + case EBT_SO_GET_INFO: + tmp.nentries = t->private->nentries; + ret = compat_table_info(t->private, &tmp); + + if (ret) + goto out; + tmp.valid_hooks = t->valid_hooks; + + if (copy_to_user(user, &tmp, *len) != 0) { + ret = -EFAULT; + break; + } + ret = 0; + break; + case EBT_SO_GET_INIT_INFO: + tmp.nentries = t->table->nentries; + tmp.entries_size = t->table->entries_size; + tmp.valid_hooks = t->table->valid_hooks; + + if (copy_to_user(user, &tmp, *len) != 0) { + ret = -EFAULT; + break; + } + ret = 0; + break; + case EBT_SO_GET_ENTRIES: + case EBT_SO_GET_INIT_ENTRIES: + ret = compat_copy_everything_to_user(t, user, len, cmd); + break; + default: + ret = -EINVAL; + } + out: + xt_compat_flush_offsets(NFPROTO_BRIDGE); + xt_compat_unlock(NFPROTO_BRIDGE); + mutex_unlock(&ebt_mutex); + return ret; +} +#endif + + static int do_ebt_get_ctl(struct sock *sk, int cmd, void __user *user, int *len) { int ret; @@ -1492,9 +2415,15 @@ static struct nf_sockopt_ops ebt_sockopts = .set_optmin = EBT_BASE_CTL, .set_optmax = EBT_SO_SET_MAX + 1, .set = do_ebt_set_ctl, +#ifdef CONFIG_COMPAT + .compat_set = compat_do_ebt_set_ctl, +#endif .get_optmin = EBT_BASE_CTL, .get_optmax = EBT_SO_GET_MAX + 1, .get = do_ebt_get_ctl, +#ifdef CONFIG_COMPAT + .compat_get = compat_do_ebt_get_ctl, +#endif .owner = THIS_MODULE, }; -- 1.6.4.4 ^ permalink raw reply related [flat|nested] 21+ messages in thread
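One detail worth spelling out: compat_do_replace() translates in two passes. compat_copy_entries() first runs with state.buf_kern_start still NULL purely to measure how large the 64-bit representation will be (registering per-entry offsets along the way), then runs again after the vmalloc() to do the actual copy. A stand-alone analogue of that pattern, with made-up record sizes and helper names rather than the kernel API:

/* Hedged analogue of the "measure, then copy" pattern. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf_state {
        char *buf_kern_start;      /* NULL during the sizing pass */
        size_t buf_kern_len;
        size_t buf_kern_offset;
};

/* pretend every 4-byte "compat" record becomes an 8-byte "native" one */
static void translate_one(struct buf_state *st, const char *src)
{
        if (st->buf_kern_start) {                       /* pass 2: really copy */
                memcpy(st->buf_kern_start + st->buf_kern_offset, src, 4);
                memset(st->buf_kern_start + st->buf_kern_offset + 4, 0, 4);
        }
        st->buf_kern_offset += 8;                       /* both passes: count */
}

int main(void)
{
        const char compat_blob[12] = { 0 };             /* three 4-byte records */
        struct buf_state st = { 0 };
        size_t off;

        for (off = 0; off < sizeof(compat_blob); off += 4)
                translate_one(&st, compat_blob + off);  /* pass 1: size only */

        st.buf_kern_len = st.buf_kern_offset;           /* like size64 = ret */
        st.buf_kern_start = calloc(1, st.buf_kern_len);
        if (!st.buf_kern_start)
                return 1;
        st.buf_kern_offset = 0;

        for (off = 0; off < sizeof(compat_blob); off += 4)
                translate_one(&st, compat_blob + off);  /* pass 2: translate */

        printf("12 compat bytes became %zu native bytes\n", st.buf_kern_len);
        free(st.buf_kern_start);
        return 0;
}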
* Re: [PATCH 5/7] netfilter: ebtables: add CONFIG_COMPAT support 2010-02-05 1:43 ` [PATCH 5/7] netfilter: ebtables: add CONFIG_COMPAT support Florian Westphal @ 2010-02-05 13:52 ` Florian Westphal 2010-02-07 22:43 ` Florian Westphal 1 sibling, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 13:52 UTC (permalink / raw) To: netfilter-devel Florian Westphal <fw@strlen.de> wrote: > remaining problem: > > The ebtables userland makefile has: > ifeq ($(shell uname -m),sparc64) > CFLAGS+=-DEBT_MIN_ALIGN=8 -DKERNEL_64_USERSPACE_32 > endif > > struct ebt_replace, ebt_entry_match etc. then contain userland-side > padding -- this means, if this is actually used in production -- > that even if we are called from a 32 bit userland, the structures > may already be in the right format. > > One possible solution would be to try to handle compat_* calls via > the native handler and then fall back to the compat one if it fails, > but it will cause kernel warnings like: > kernel msg: ebtables bug: please report to author: Wrong size > > Thus this is not done here. > > I am not sure if this "userland-padding" is widely used; it looks > incomplete (eg. I couldn't find any > userland-padding for match/target structures; and some (eg. mark) > do need special handling). Just for the record: Jan Engelhardt was so kind to try "-j mark" with the 32bit sparc ebtables userland and reported that it is not working. So I am not sure if I should bother with supporting the "already padded" userland scenario, as it won't work in some cases anyway. ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 5/7] netfilter: ebtables: add CONFIG_COMPAT support 2010-02-05 1:43 ` [PATCH 5/7] netfilter: ebtables: add CONFIG_COMPAT support Florian Westphal 2010-02-05 13:52 ` Florian Westphal @ 2010-02-07 22:43 ` Florian Westphal 1 sibling, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-07 22:43 UTC (permalink / raw) To: netfilter-devel Florian Westphal <fw@strlen.de> wrote: > From: Florian Westphal <fwestphal@astaro.com> > > Main code for 32 bit userland ebtables binary with 64 bit kernels > support. [..] > +static int compat_copy_everything_to_user(struct ebt_table *t, void __user *user, > + int *len, int cmd) [..] > + if (tmp.num_counters) { > + if (tmp.num_counters != tinfo.nentries) { > + BUGPRINT("Num_counters wrong\n"); > + return -EINVAL; > + } > + counterstmp = vmalloc(tinfo.nentries * sizeof(*counterstmp)); > + if (!counterstmp) > + return -ENOMEM; > + > + write_lock_bh(&t->lock); > + get_counters(oldcounters, counterstmp, tinfo.nentries); > + write_unlock_bh(&t->lock); > + > + buf = compat_ptr(tmp.counters); > + if (copy_to_user(buf, counterstmp, tinfo.nentries * sizeof(*counterstmp))) { > + vfree(counterstmp); > + return -EFAULT; > + } > + vfree(counterstmp); > + } This is almost identical to the non-compat case; I'll move this to a common helper in the next iteration. Also, compat_do_ebt_get_ctl() does not have the required *len check. Will fix in next revision. ^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 6/7] netfilter: ebt_limit: add CONFIG_COMPAT support 2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal ` (4 preceding siblings ...) 2010-02-05 1:43 ` [PATCH 5/7] netfilter: ebtables: add CONFIG_COMPAT support Florian Westphal @ 2010-02-05 1:43 ` Florian Westphal 2010-02-05 1:43 ` [PATCH 7/7] netfilter: ebtables: mark: " Florian Westphal 2010-02-05 7:15 ` [PATCH 0/7] netfilter: ebtables: " Jan Engelhardt 7 siblings, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 1:43 UTC (permalink / raw) To: netfilter-devel The ebt_limit structure is larger on 64 bit systems due to the "long" type used in the (kernel-only) data section. Setting .compatsize is enough in this case; these values have no meaning in userspace. Signed-off-by: Florian Westphal <fwestphal@astaro.com> --- net/bridge/netfilter/ebt_limit.c | 16 ++++++++++++++++ 1 files changed, 16 insertions(+), 0 deletions(-) diff --git a/net/bridge/netfilter/ebt_limit.c b/net/bridge/netfilter/ebt_limit.c index 9dd16e6..7a81827 100644 --- a/net/bridge/netfilter/ebt_limit.c +++ b/net/bridge/netfilter/ebt_limit.c @@ -84,6 +84,19 @@ static bool ebt_limit_mt_check(const struct xt_mtchk_param *par) return true; } + +#ifdef CONFIG_COMPAT +/* + * no conversion function needed -- + * only avg/burst have meaningful values in userspace. + */ +struct ebt_compat_limit_info { + compat_uint_t avg, burst; + compat_ulong_t prev; + compat_uint_t credit, credit_cap, cost; +}; +#endif + static struct xt_match ebt_limit_mt_reg __read_mostly = { .name = "limit", .revision = 0, @@ -91,6 +104,9 @@ static struct xt_match ebt_limit_mt_reg __read_mostly = { .match = ebt_limit_mt, .checkentry = ebt_limit_mt_check, .matchsize = sizeof(struct ebt_limit_info), +#ifdef CONFIG_COMPAT + .compatsize = sizeof(struct ebt_compat_limit_info), +#endif .me = THIS_MODULE, }; -- 1.6.4.4 ^ permalink raw reply related [flat|nested] 21+ messages in thread
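The size difference described above is easy to reproduce in user space; the struct definitions below mirror the ones in the patch, with fixed-width stand-ins for the compat_* types (illustration only, not the kernel headers):

/* Why .compatsize alone is enough for ebt_limit: the two layouts differ
 * only in total size, and only avg/burst mean anything to userspace. */
#include <stdint.h>
#include <stdio.h>

struct native_limit_info {         /* like struct ebt_limit_info in an LP64 kernel */
        uint32_t avg, burst;
        unsigned long prev;        /* 8 bytes on LP64, 4 on ILP32 */
        uint32_t credit, credit_cap, cost;
};

struct compat_limit_info {         /* like struct ebt_compat_limit_info */
        uint32_t avg, burst;
        uint32_t prev;             /* compat_ulong_t is 32 bits wide */
        uint32_t credit, credit_cap, cost;
};

int main(void)
{
        /* on x86_64 this typically prints 32 vs 24; the exact values
         * depend on the ABI's padding rules */
        printf("native %zu bytes, compat %zu bytes\n",
               sizeof(struct native_limit_info),
               sizeof(struct compat_limit_info));
        return 0;
}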
* [PATCH 7/7] netfilter: ebtables: mark: add CONFIG_COMPAT support 2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal ` (5 preceding siblings ...) 2010-02-05 1:43 ` [PATCH 6/7] netfilter: ebt_limit: " Florian Westphal @ 2010-02-05 1:43 ` Florian Westphal 2010-02-05 7:15 ` [PATCH 0/7] netfilter: ebtables: " Jan Engelhardt 7 siblings, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-05 1:43 UTC (permalink / raw) To: netfilter-devel Add the required handlers to convert 32 bit ebtables mark match and match target structs to 64 bit layout. Signed-off-by: Florian Westphal <fwestphal@astaro.com> --- net/bridge/netfilter/ebt_mark.c | 31 +++++++++++++++++++++++++++++++ net/bridge/netfilter/ebt_mark_m.c | 37 +++++++++++++++++++++++++++++++++++++ 2 files changed, 68 insertions(+), 0 deletions(-) diff --git a/net/bridge/netfilter/ebt_mark.c b/net/bridge/netfilter/ebt_mark.c index 153e167..21404b8 100644 --- a/net/bridge/netfilter/ebt_mark.c +++ b/net/bridge/netfilter/ebt_mark.c @@ -52,6 +52,32 @@ static bool ebt_mark_tg_check(const struct xt_tgchk_param *par) return false; return true; } +#ifdef CONFIG_COMPAT +struct compat_ebt_mark_t_info { + compat_ulong_t mark; + compat_uint_t target; +}; + +static void mark_tg_compat_from_user(void *dst, void *src) +{ + const struct compat_ebt_mark_t_info *user = src; + struct ebt_mark_t_info *kern = dst; + + kern->mark = user->mark; + kern->target = user->target; +} + +static int mark_tg_compat_to_user(void __user *dst, void *src) +{ + struct compat_ebt_mark_t_info __user *user = dst; + const struct ebt_mark_t_info *kern = src; + + if (put_user(kern->mark, &user->mark) || + put_user(kern->target, &user->target)) + return -EFAULT; + return 0; +} +#endif static struct xt_target ebt_mark_tg_reg __read_mostly = { .name = "mark", @@ -60,6 +86,11 @@ static struct xt_target ebt_mark_tg_reg __read_mostly = { .target = ebt_mark_tg, .checkentry = ebt_mark_tg_check, .targetsize = sizeof(struct ebt_mark_t_info), +#ifdef CONFIG_COMPAT + .compatsize = sizeof(struct compat_ebt_mark_t_info), + .compat_from_user = mark_tg_compat_from_user, + .compat_to_user = mark_tg_compat_to_user, +#endif .me = THIS_MODULE, }; diff --git a/net/bridge/netfilter/ebt_mark_m.c b/net/bridge/netfilter/ebt_mark_m.c index 89abf40..ea89d95 100644 --- a/net/bridge/netfilter/ebt_mark_m.c +++ b/net/bridge/netfilter/ebt_mark_m.c @@ -35,6 +35,38 @@ static bool ebt_mark_mt_check(const struct xt_mtchk_param *par) return true; } + +#ifdef CONFIG_COMPAT +struct compat_ebt_mark_m_info { + compat_ulong_t mark, mask; + uint8_t invert, bitmask; +}; + +static void mark_mt_compat_from_user(void *dst, void *src) +{ + const struct compat_ebt_mark_m_info *user = src; + struct ebt_mark_m_info *kern = dst; + + kern->mark = user->mark; + kern->mask = user->mask; + kern->invert = user->invert; + kern->bitmask = user->bitmask; +} + +static int mark_mt_compat_to_user(void __user *dst, void *src) +{ + struct compat_ebt_mark_m_info __user *user = dst; + const struct ebt_mark_m_info *kern = src; + + if (put_user(kern->mark, &user->mark) || + put_user(kern->mask, &user->mask) || + put_user(kern->invert, &user->invert) || + put_user(kern->bitmask,&user->bitmask)) + return -EFAULT; + return 0; +} +#endif + static struct xt_match ebt_mark_mt_reg __read_mostly = { .name = "mark_m", .revision = 0, @@ -42,6 +74,11 @@ static struct xt_match ebt_mark_mt_reg __read_mostly = { .match = ebt_mark_mt, .checkentry = ebt_mark_mt_check, .matchsize = sizeof(struct ebt_mark_m_info), +#ifdef 
CONFIG_COMPAT + .compatsize = sizeof(struct compat_ebt_mark_m_info), + .compat_from_user = mark_mt_compat_from_user, + .compat_to_user = mark_mt_compat_to_user, +#endif .me = THIS_MODULE, }; -- 1.6.4.4 ^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal ` (6 preceding siblings ...) 2010-02-05 1:43 ` [PATCH 7/7] netfilter: ebtables: mark: " Florian Westphal @ 2010-02-05 7:15 ` Jan Engelhardt 2010-02-05 14:00 ` Florian Westphal 7 siblings, 1 reply; 21+ messages in thread From: Jan Engelhardt @ 2010-02-05 7:15 UTC (permalink / raw) To: Florian Westphal; +Cc: netfilter-devel On Friday 2010-02-05 02:43, Florian Westphal wrote: >Hello netfilter team, > >following patch series adds CONFIG_COMPAT >support to ebtables (unfortunately tested on x86_64 only). (Good thing I acquired that sparc64! Now if only the virtualization was usable with Linux-only...;) In a quick look, ebtables 2.0.8-2 seems to just work there in both 32/64 and 64/64 [U/K-bitness] combinations: # make CC="gcc -m64" LD="ld -melf64_sparc" -j24 ... # LD_LIBRARY_PATH=.:extensions ./ebtables -A INPUT -p 0x0800 --ip-source 1.2.3.4 # LD_LIBRARY_PATH=.:extensions ./ebtables -A INPUT -p 0x0800 --ip-source 1.2.3.4 # LD_LIBRARY_PATH=.:extensions ./ebtables -L Bridge table: filter Bridge chain: INPUT, entries: 2, policy: ACCEPT -p 0x800 --ip-src 1.2.3.4 -j CONTINUE -p 0x800 --ip-src 1.2.3.4 -j CONTINUE Bridge chain: FORWARD, entries: 0, policy: ACCEPT Bridge chain: OUTPUT, entries: 0, policy: ACCEPT # file ebtables ebtables: ELF 64-bit MSB executable, SPARC V9, relaxed memory ordering, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped and # make clean; make -j24 ... # LD_LIBRARY_PATH=.:extensions ./ebtables -L Bridge table: filter Bridge chain: INPUT, entries: 2, policy: ACCEPT -p 0x800 --ip-src 1.2.3.4 -j CONTINUE -p 0x800 --ip-src 1.2.3.4 -j CONTINUE Bridge chain: FORWARD, entries: 0, policy: ACCEPT Bridge chain: OUTPUT, entries: 0, policy: ACCEPT # LD_LIBRARY_PATH=.:extensions ./ebtables -A INPUT -p 0x0800 --ip-source 1.2.3.5 # LD_LIBRARY_PATH=.:extensions ./ebtables -L Bridge table: filter Bridge chain: INPUT, entries: 3, policy: ACCEPT -p 0x800 --ip-src 1.2.3.4 -j CONTINUE -p 0x800 --ip-src 1.2.3.4 -j CONTINUE -p 0x800 --ip-src 1.2.3.5 -j CONTINUE Bridge chain: FORWARD, entries: 0, policy: ACCEPT Bridge chain: OUTPUT, entries: 0, policy: ACCEPT # file ebtables ebtables: ELF 32-bit MSB executable, SPARC32PLUS, V8+ Required, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped So I wonder whether extra patches are really needed for x86_64. ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-05 7:15 ` [PATCH 0/7] netfilter: ebtables: " Jan Engelhardt @ 2010-02-05 14:00 ` Florian Westphal 2010-02-05 17:15 ` Bart De Schuymer 0 siblings, 1 reply; 21+ messages in thread From: Florian Westphal @ 2010-02-05 14:00 UTC (permalink / raw) To: netfilter-devel Jan Engelhardt <jengelh@medozas.de> wrote: > In a quick look, ebtables 2.0.8-2 seems to just work there in both > 32/64 and 64/64 [U/K-bitness] combinations: [..] > > So I wonder whether extra patches are really needed for x86_64. I did investigate the "lets pad the structures in userspace" scenario. It has two problems. First of all, support for a few targets/matches is missing. Second, the point is to run unmodified binaries with either a 32 or 64 bit kernel without recompiling. Solving this transparently (i.e. same binary) would thus require a run-time check for the kernel architecture before the set/getsockopt can be made. As far as I could see this would require a lot of changes to the ebtables userland code base, too. ^ permalink raw reply [flat|nested] 21+ messages in thread
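For illustration, such a run-time check would have to guess the kernel's word size from something like uname(2). The helper below is only a sketch of that heuristic, not code from any proposed patch, and mainly shows how fragile the approach is.

#include <stdbool.h>
#include <string.h>
#include <sys/utsname.h>

/* guess the kernel's word size from the uname(2) machine string --
 * illustrative only: names like "s390x" contain no "64", and
 * personality(PER_LINUX32) makes a 64-bit kernel report a 32-bit
 * machine name, so even this check can be fooled */
bool kernel_looks_64bit(void)
{
	struct utsname u;

	if (uname(&u) < 0)
		return sizeof(long) == 8;	/* fall back to our own width */

	return strstr(u.machine, "64") != NULL;
}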
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-05 14:00 ` Florian Westphal @ 2010-02-05 17:15 ` Bart De Schuymer 2010-02-05 18:02 ` David Miller 2010-02-05 19:53 ` Florian Westphal 0 siblings, 2 replies; 21+ messages in thread From: Bart De Schuymer @ 2010-02-05 17:15 UTC (permalink / raw) To: Florian Westphal; +Cc: netfilter-devel Hello Florian, First of all, thanks for pointing out these problems. Florian Westphal schreef: > Jan Engelhardt <jengelh@medozas.de> wrote: > >> In a quick look, ebtables 2.0.8-2 seems to just work there in both >> 32/64 and 64/64 [U/K-bitness] combinations: >> > [..] > >> So I wonder whether extra patches are really needed for x86_64. >> > > I did investigate the "lets pad the structures in userspace" scenario. > It has two problems. > > First of all, support for a few targets/matches is missing. > > I would certainly accept patches to the userland code to fix these remaining problems that I was unaware of. I don't have access to such a machine. > Second, the point is to run unmodified binaries with either a 32 > or 64 bit kernel without recompiling. > > Solving this transparently (i.e. same binary) would thus > require a run-time check for the kernel architecture before the > set/getsockopt can be made. > > As far as I could see this would require a lot of changes to the ebtables > userland code base, too. > Could you elaborate why having to distribute two compiled versions of ebtables for different platform configurations is more overhead than the overhead added by your kernel patches? Can't you decide on installation time of the Linux system which binary to provide? At this time I am more in favor of fixing userland... cheers, Bart -- Bart De Schuymer www.artinalgorithms.be ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-05 17:15 ` Bart De Schuymer @ 2010-02-05 18:02 ` David Miller 2010-02-06 13:08 ` Bart De Schuymer 2010-02-05 19:53 ` Florian Westphal 1 sibling, 1 reply; 21+ messages in thread From: David Miller @ 2010-02-05 18:02 UTC (permalink / raw) To: bdschuym; +Cc: fw, netfilter-devel From: Bart De Schuymer <bdschuym@pandora.be> Date: Fri, 05 Feb 2010 18:15:29 +0100 > At this time I am more in favor of fixing userland... Compatibility issues between 32-bit and 64-bit interfaces when running on a 64-bit kernel should _always_ be handled in the kernel via the compat layer. Every attempt to do this in userspace has somehow been ugly and has utterly failed. ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-05 18:02 ` David Miller @ 2010-02-06 13:08 ` Bart De Schuymer 2010-02-06 13:50 ` Jan Engelhardt 2010-02-07 22:38 ` Florian Westphal 0 siblings, 2 replies; 21+ messages in thread From: Bart De Schuymer @ 2010-02-06 13:08 UTC (permalink / raw) To: David Miller; +Cc: fw, netfilter-devel David Miller schreef: > From: Bart De Schuymer <bdschuym@pandora.be> > Date: Fri, 05 Feb 2010 18:15:29 +0100 > > >> At this time I am more in favor of fixing userland... >> > > Compatability issues between 32-bit and 64-bit interfaces > when running on a 64-bit kernel should _always_ be handled > in the kernel via the compat layer. > > Every attempt to do this in userspace somehow been ugly and has > utterly failed. > I actually think the solution inside the ebtables userspace code is very simple and elegant. The extra code for userspace to fix the few ebtables matches that still have problems would be simple too. If I'd have known about this mandatory compat layer 7 years ago I wouldn't have bothered trying to fix it in userspace. A simple kernel patch of mine to get ebtables userspace running on a user32/kernel64 system was accepted by you back in 2003 (http://osdir.com/ml/linux.network.bridge.ebtables.devel/2003-07/msg00028.html). I can't tell how many of these systems are running ebtables but considering most of the functionality has been working on a user32/kernel64 system since 2004 I think it's safe to say that Florian's patch will break a few systems. Apart from this fact I have no objection to Florian's patch, although it seems strange that such a simple solution in userspace needs a large additional codebase in the kernel... I'm not familiar with the way this compat layer works, but is there a standard way to ensure that old ebtables binaries don't use the compat layer, while a new version of the userspace program would? If all else fails, we could do this with a special flag in struct ebt_replace::valid_hooks, as this member has some unused bits. cheers, Bart -- Bart De Schuymer www.artinalgorithms.be ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-06 13:08 ` Bart De Schuymer @ 2010-02-06 13:50 ` Jan Engelhardt 2010-02-07 22:38 ` Florian Westphal 1 sibling, 0 replies; 21+ messages in thread From: Jan Engelhardt @ 2010-02-06 13:50 UTC (permalink / raw) To: Bart De Schuymer; +Cc: David Miller, fw, netfilter-devel On Saturday 2010-02-06 14:08, Bart De Schuymer wrote: >David Miller schreef: >> >>> At this time I am more in favor of fixing userland... >> >> Compatability issues between 32-bit and 64-bit interfaces >> when running on a 64-bit kernel should _always_ be handled >> in the kernel via the compat layer. >> >> Every attempt to do this in userspace somehow been ugly and has >> utterly failed. > >I actually think the solution inside the ebtables userspace code is very >simple and elegant. The extra code for userspace to fix the few ebtables >matches that still have problems would be simple too. >If I'd have known about this mandatory compat layer 7 years ago I >wouldn't have bothered trying to fix it in userspace. A simple kernel >patch of mine to get ebtables userspace running on a user32/kernel64 >system was accepted by you back in 2003 >(http://osdir.com/ml/linux.network.bridge.ebtables.devel/2003-07/msg00028.html). >I can't tell how many of these systems are running ebtables but >considering most of the functionality has been working on a >user32/kernel64 system since 2004 I think it's safe to say that >Florian's patch will break a few systems. Apart from this fact I have no >objection to Florian's patch, although it seems strange that such a >simple solution in userspace needs a large additional codebase in the >kernel... >I'm not familiar with the way this compat layer works, but is there a >standard way to ensure that old ebtables binaries don't use the compat >layer, while a new version of the userspace program would? Xtables uses revisions, and extensions with problematic layouts use match->compat_from_user that is non-NULL. ^ permalink raw reply [flat|nested] 21+ messages in thread
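Roughly speaking, the compat core dispatches per extension: if a converter was registered it is called, otherwise a plain copy is good enough because the layouts already agree. The helper below is a simplified sketch of that dispatch, not the actual x_tables.c code, and the function name is made up.

#include <linux/netfilter/x_tables.h>
#include <linux/string.h>

/* simplified sketch (CONFIG_COMPAT only, name made up): hand a 32-bit
 * match blob over to its native layout, either via the extension's own
 * converter or, when none is registered, via a plain copy */
static void compat_copy_match_data(const struct xt_match *match,
				   void *kern, void *compat,
				   unsigned int compat_size)
{
	if (match->compat_from_user)
		match->compat_from_user(kern, compat);	/* e.g. ebt_mark */
	else
		memcpy(kern, compat, compat_size);
}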
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-06 13:08 ` Bart De Schuymer 2010-02-06 13:50 ` Jan Engelhardt @ 2010-02-07 22:38 ` Florian Westphal 2010-02-07 23:19 ` Bart De Schuymer 1 sibling, 1 reply; 21+ messages in thread From: Florian Westphal @ 2010-02-07 22:38 UTC (permalink / raw) To: Bart De Schuymer; +Cc: David Miller, netfilter-devel Bart De Schuymer <bdschuym@pandora.be> wrote: > I can't tell how many of these systems are running ebtables but > considering most of the functionality has been working on a > user32/kernel64 system since 2004 I think it's safe to say that > Florian's patch will break a few systems. Yes, I had feared that. The way I see it we have a couple of alternatives. One way would be to disable the in-kernel ebt-compat layer for sparc64. Of course this assumes that this userland-side padding is only used on the sparc platform. > I'm not familiar with the way this compat layer works, but is there a > standard way to ensure that old ebtables binaries don't use the compat > layer, while a new version of the userspace program would? It should be possible to figure out if we need to do fixups, because struct ebt_replace size differs (and the sockopt *len includes this value). In some cases this is as simple as "if (*len == sizeof(struct ebt_replace))". But even in case of EBT_SO_GET_ENTRIES it seems possible. We can try the "native" getsockopt call first; it should then error out early due to this check: if (*len != sizeof(struct ebt_replace) + entries_size + (tmp.num_counters? nentries * sizeof(struct ebt_counter): 0)) { /* -EINVAL */ we can then re-try with all the compat fixups. What do you think? I'll do some tests with this in the next couple of days; if everything looks good (and there are no objections) I'll post a revised patch set. > If all else fails, we could do this with a special flag in struct > ebt_replace::valid_hooks, as this member has some unused bits. Thanks, it's good to know there is some kind of "last resort". But I'd like to avoid any incompatibilities, if possible. Regards, Florian ^ permalink raw reply [flat|nested] 21+ messages in thread
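Sketched as code, the fallback being discussed might look like the snippet below. This is only an illustration of the idea, not the patch that was later posted; do_ebt_get_ctl_native() and do_ebt_get_ctl_fixup() are placeholder names for the existing native getsockopt handler and a yet-to-be-written compat fixup path.

#include <linux/netfilter_bridge/ebtables.h>
#include <net/sock.h>

/* placeholders for the existing native getsockopt handler and a
 * to-be-written compat fixup path */
static int do_ebt_get_ctl_native(struct sock *sk, int cmd,
				 void __user *user, int *len);
static int do_ebt_get_ctl_fixup(struct sock *sk, int cmd,
				void __user *user, int *len);

static int compat_ebt_get_entries(struct sock *sk, int cmd,
				  void __user *user, int *len)
{
	int ret;

	/* an old 32-bit ebtables that pads its structures to the 64-bit
	 * layout passes native-sized buffers and is accepted as-is */
	ret = do_ebt_get_ctl_native(sk, cmd, user, len);
	if (ret != -EINVAL)
		return ret;

	/* genuine 32-bit layout: re-try with the compat fixups applied */
	return do_ebt_get_ctl_fixup(sk, cmd, user, len);
}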
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-07 22:38 ` Florian Westphal @ 2010-02-07 23:19 ` Bart De Schuymer 2010-02-07 23:28 ` Florian Westphal 0 siblings, 1 reply; 21+ messages in thread From: Bart De Schuymer @ 2010-02-07 23:19 UTC (permalink / raw) To: Florian Westphal; +Cc: David Miller, netfilter-devel Florian Westphal schreef: > Bart De Schuymer <bdschuym@pandora.be> wrote: > >> I can't tell how many of these systems are running ebtables but >> considering most of the functionality has been working on a >> user32/kernel64 system since 2004 I think it's safe to say that >> Florian's patch will break a few systems. >> > > Yes, I had feared that. > The way I see it we have a couple of alternatives. > > One way would be to disable the in-kernel ebt-compat layer for > sparc64. Of course this assumes that this userland-side padding > is only used on the sparc platform. > > >> I'm not familiar with the way this compat layer works, but is there a >> standard way to ensure that old ebtables binaries don't use the compat >> layer, while a new version of the userspace program would? >> > > It should be possible to figure out if we need to do fixups, > because struct ebt_replace size differs (and the sockopt *len > includes this value). > > In some cases this is as simple as "if (*len == sizeof(struct > ebt_replace)". But even in case of EBT_SO_GET_ENTRIES it seems > possible. We can try the "native" getsockopt call first, it should > then error out early due to this check: > > if (*len != sizeof(struct ebt_replace) + entries_size + > (tmp.num_counters? nentries * sizeof(struct ebt_counter): 0)) { > sizeof(struct ebt_counter): 0)) { > /* -EINVAL */ > > we can then re-try with all the compat fixups. > > What do you think? > > I think it's best to have the compat code in the fast path, since that's the code that will be needed in the future on those machines (I'll make the userspace workaround a compile-time option that is turned off by default in the next release of ebtables). Shouldn't it be possible for the compat code to know when a certain wrong size is due to the userspace workaround? If that situation is the case, then the compat code can execute the "native" getsockopt instead of producing a kernel error message... cheers, Bart -- Bart De Schuymer www.artinalgorithms.be ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-07 23:19 ` Bart De Schuymer @ 2010-02-07 23:28 ` Florian Westphal 0 siblings, 0 replies; 21+ messages in thread From: Florian Westphal @ 2010-02-07 23:28 UTC (permalink / raw) To: Bart De Schuymer; +Cc: David Miller, netfilter-devel Bart De Schuymer <bdschuym@pandora.be> wrote: > I think it's best to have the compat code in the fast path, since that's > the code that will be needed in the future on those machines (I'll make > the userspace workaround a compile-time option that is turned off by > default in the next release of ebtables). > Shouldn't it be possible for the compat code to know when a certain > wrong size is due to the userspace workaround? > If that situation is the > case, then the compat code can execute the "native" getsockopt instead > of producing a kernel error message... This is what I plan to do in the next revision. I'll send this particular change as a patch of its own on top of the main compat hunk to ease review. Thanks, Florian ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-05 17:15 ` Bart De Schuymer 2010-02-05 18:02 ` David Miller @ 2010-02-05 19:53 ` Florian Westphal 2010-02-05 20:07 ` Jan Engelhardt 1 sibling, 1 reply; 21+ messages in thread From: Florian Westphal @ 2010-02-05 19:53 UTC (permalink / raw) To: Bart De Schuymer; +Cc: netfilter-devel Bart De Schuymer <bdschuym@pandora.be> wrote: > Florian Westphal schreef: > > Jan Engelhardt <jengelh@medozas.de> wrote: > > > >> In a quick look, ebtables 2.0.8-2 seems to just work there in both > >> 32/64 and 64/64 [U/K-bitness] combinations: > >> > > [..] > > > >> So I wonder whether extra patches are really needed for x86_64. > >> > > > > I did investigate the "lets pad the structures in userspace" scenario. > > It has two problems. > > > > First of all, support for a few targets/matches is missing. > > > I would certainly accept patches to the userland code to fix these > remaining problems that I was unaware of. I don't have access to such a > machine. If these CONFIG_COMPAT patches are not accepted I'll send userspace patches for those targets/matches that I am aware of. > > Second, the point is to run unmodified binaries with either a 32 > > or 64 bit kernel without recompiling. > > > > Solving this transparently (i.e. same binary) would thus > > require a run-time check for the kernel architecture before the > > set/getsockopt can be made. > > > > As far as I could see this would require a lot of changes to the ebtables > > userland code base, too. > > > Could you elaborate why having to distribute two compiled versions of > ebtables for different platform configurations is more overhead than the > overhead added by your kernel patches? I just believe that userspace should not have to care about this kind of issue, thats all. > Can't you decide on installation > time of the Linux system which binary to provide? It should still work when the kernel is changed later. The 'easiest' solution would be to ship both 64 and 32bit binaries and have some wrapper that picks the right one depending on kernel. But this has other issues (e.g. need for 64bit libc) or, when using a statically linked binary, the drawback that you need to use e.g. "6" instead of "tcp" due to missing libraries glibc wants to load at run time. But to be honest to me that cures the symptoms (userspace binary does not work) and not the problem (kernel ABI incompatibility). Regards, Florian ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support 2010-02-05 19:53 ` Florian Westphal @ 2010-02-05 20:07 ` Jan Engelhardt 0 siblings, 0 replies; 21+ messages in thread From: Jan Engelhardt @ 2010-02-05 20:07 UTC (permalink / raw) To: Florian Westphal; +Cc: Bart De Schuymer, netfilter-devel On Friday 2010-02-05 20:53, Florian Westphal wrote: > >> Can't you decide on installation >> time of the Linux system which binary to provide? > >It should still work when the kernel is changed later. > >The 'easiest' solution would be to ship both 64 and 32bit binaries >and have some wrapper that picks the right one depending on kernel. Why ship two? There is a preferred userspace for each kernel architecture (that I have seen). If you try to install ebtables on a x86_64 distro, it generally offers you ebtables.x86_64. >But this has other issues (e.g. need for 64bit libc) or, >when using a statically linked binary, the drawback that you need to use >e.g. "6" instead of "tcp" due to missing libraries glibc wants to load >at run time. Anything that is dlopened is not strictly required for execution. ^ permalink raw reply [flat|nested] 21+ messages in thread
end of thread, other threads:[~2010-02-07 23:28 UTC | newest]

Thread overview: 21+ messages
2010-02-05 1:43 [PATCH 0/7] netfilter: ebtables: CONFIG_COMPAT support Florian Westphal
2010-02-05 1:43 ` [PATCH 1/7] netfilter: ebtables: abort if next_offset is too small Florian Westphal
2010-02-05 1:43 ` [PATCH 2/7] netfilter: ebtables: avoid explicit XT_ALIGN() in match/targets Florian Westphal
2010-02-05 1:43 ` [PATCH 3/7] netfilter: CONFIG_COMPAT: allow delta to exceed 32767 Florian Westphal
2010-02-05 1:43 ` [PATCH 4/7] netfilter: ebtables: split do_replace into two functions Florian Westphal
2010-02-05 1:43 ` [PATCH 5/7] netfilter: ebtables: add CONFIG_COMPAT support Florian Westphal
2010-02-05 13:52 ` Florian Westphal
2010-02-07 22:43 ` Florian Westphal
2010-02-05 1:43 ` [PATCH 6/7] netfilter: ebt_limit: " Florian Westphal
2010-02-05 1:43 ` [PATCH 7/7] netfilter: ebtables: mark: " Florian Westphal
2010-02-05 7:15 ` [PATCH 0/7] netfilter: ebtables: " Jan Engelhardt
2010-02-05 14:00 ` Florian Westphal
2010-02-05 17:15 ` Bart De Schuymer
2010-02-05 18:02 ` David Miller
2010-02-06 13:08 ` Bart De Schuymer
2010-02-06 13:50 ` Jan Engelhardt
2010-02-07 22:38 ` Florian Westphal
2010-02-07 23:19 ` Bart De Schuymer
2010-02-07 23:28 ` Florian Westphal
2010-02-05 19:53 ` Florian Westphal
2010-02-05 20:07 ` Jan Engelhardt