* [PATCH 1/1] vxlan: insert ipv6 macro
@ 2016-10-11 8:23 zyjzyj2000
2016-10-11 14:06 ` Jiri Benc
0 siblings, 1 reply; 6+ messages in thread
From: zyjzyj2000 @ 2016-10-11 8:23 UTC (permalink / raw)
To: netdev, pabeni, daniel, pshelar, aduyck, hannes, jbenc, davem,
zyjzyj2000
From: Zhu Yanjun <zyjzyj2000@gmail.com>
The source code is related to IPv6. As such, it is better to guard
it with the IPv6 config macro.
Signed-off-by: Zhu Yanjun <zyjzyj2000@gmail.com>
---
drivers/net/vxlan.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index e7d1668..9af6600 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -2647,15 +2647,15 @@ static struct socket *vxlan_create_sock(struct net *net, bool ipv6,
 	int err;
 
 	memset(&udp_conf, 0, sizeof(udp_conf));
-
+#if IS_ENABLED(CONFIG_IPV6)
 	if (ipv6) {
 		udp_conf.family = AF_INET6;
 		udp_conf.use_udp6_rx_checksums =
 		    !(flags & VXLAN_F_UDP_ZERO_CSUM6_RX);
 		udp_conf.ipv6_v6only = 1;
-	} else {
+	} else
+#endif
 		udp_conf.family = AF_INET;
-	}
 
 	udp_conf.local_udp_port = port;
--
2.7.4
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH 1/1] vxlan: insert ipv6 macro
2016-10-11 8:23 [PATCH 1/1] vxlan: insert ipv6 macro zyjzyj2000
@ 2016-10-11 14:06 ` Jiri Benc
2016-10-12 13:01 ` zhuyj
From: Jiri Benc @ 2016-10-11 14:06 UTC (permalink / raw)
To: zyjzyj2000; +Cc: netdev, pabeni, daniel, pshelar, aduyck, hannes, davem
On Tue, 11 Oct 2016 16:23:31 +0800, zyjzyj2000@gmail.com wrote:
> --- a/drivers/net/vxlan.c
> +++ b/drivers/net/vxlan.c
> @@ -2647,15 +2647,15 @@ static struct socket *vxlan_create_sock(struct net *net, bool ipv6,
>  	int err;
>  
>  	memset(&udp_conf, 0, sizeof(udp_conf));
> -
> +#if IS_ENABLED(CONFIG_IPV6)
>  	if (ipv6) {
>  		udp_conf.family = AF_INET6;
>  		udp_conf.use_udp6_rx_checksums =
>  		    !(flags & VXLAN_F_UDP_ZERO_CSUM6_RX);
>  		udp_conf.ipv6_v6only = 1;
> -	} else {
> +	} else
> +#endif
>  		udp_conf.family = AF_INET;
> -	}
Zhu Yanjun, before posting patches such as the previous ones or
this one, please test whether they make any difference. In this case,
try to compile the code with IPv6 disabled before and after this patch,
disassemble and compare the results. You'll see that this patch is
pointless.
It's pretty obvious from the code but to be really sure, I've just
quickly built the vxlan module with IPv6 disabled. And indeed, as
expected, the compiler just inlined everything into vxlan_open. The
whole chain vxlan_open -> vxlan_sock_add -> __vxlan_sock_add (note that
there's only a single caller of __vxlan_sock_add with IPv6 disabled) ->
vxlan_socket_create -> vxlan_create_sock is inlined.
It also means the code in the "if (ipv6)" branch is completely
eliminated by the compiler even without ugly #ifdefs.
Jiri
* Re: [PATCH 1/1] vxlan: insert ipv6 macro
2016-10-11 14:06 ` Jiri Benc
@ 2016-10-12 13:01 ` zhuyj
2016-10-12 13:16 ` Jiri Benc
From: zhuyj @ 2016-10-12 13:01 UTC (permalink / raw)
To: Jiri Benc
Cc: netdev, pabeni, daniel, Pravin B Shelar, Alexander Duyck, hannes,
David S. Miller
Hi, Jiri
How do you explain the following source code? By that reasoning, are
the #ifdefs in the code below pointless as well?
As for the previous patch, I will compile and analyze it, but right
now I am busy with something else. Once I reach a conclusion, I will
let you know.
Thanks for your reply.
static void vxlan_sock_release(struct vxlan_dev *vxlan)
{
	bool ipv4 = __vxlan_sock_release_prep(vxlan->vn4_sock);
#if IS_ENABLED(CONFIG_IPV6)
	bool ipv6 = __vxlan_sock_release_prep(vxlan->vn6_sock);
#endif

	synchronize_net();

	if (ipv4) {
		udp_tunnel_sock_release(vxlan->vn4_sock->sock);
		kfree(vxlan->vn4_sock);
	}
#if IS_ENABLED(CONFIG_IPV6)
	if (ipv6) {
		udp_tunnel_sock_release(vxlan->vn6_sock->sock);
		kfree(vxlan->vn6_sock);
	}
#endif
}
On Tue, Oct 11, 2016 at 10:06 PM, Jiri Benc <jbenc@redhat.com> wrote:
> On Tue, 11 Oct 2016 16:23:31 +0800, zyjzyj2000@gmail.com wrote:
>> --- a/drivers/net/vxlan.c
>> +++ b/drivers/net/vxlan.c
>> @@ -2647,15 +2647,15 @@ static struct socket *vxlan_create_sock(struct net *net, bool ipv6,
>>  	int err;
>>  
>>  	memset(&udp_conf, 0, sizeof(udp_conf));
>> -
>> +#if IS_ENABLED(CONFIG_IPV6)
>>  	if (ipv6) {
>>  		udp_conf.family = AF_INET6;
>>  		udp_conf.use_udp6_rx_checksums =
>>  		    !(flags & VXLAN_F_UDP_ZERO_CSUM6_RX);
>>  		udp_conf.ipv6_v6only = 1;
>> -	} else {
>> +	} else
>> +#endif
>>  		udp_conf.family = AF_INET;
>> -	}
>
> Zhu Yanjun, before posting patches such as the previous ones or
> this one, please test whether they make any difference. In this case,
> try to compile the code with IPv6 disabled before and after this patch,
> disassemble and compare the results. You'll see that this patch is
> pointless.
>
> It's pretty obvious from the code but to be really sure, I've just
> quickly built the vxlan module with IPv6 disabled. And indeed, as
> expected, the compiler just inlined everything into vxlan_open. The
> whole chain vxlan_open -> vxlan_sock_add -> __vxlan_sock_add (note that
> there's only a single caller of __vxlan_sock_add with IPv6 disabled) ->
> vxlan_socket_create -> vxlan_create_sock is inlined.
>
> It also means the code in the "if (ipv6)" branch is completely
> eliminated by the compiler even without ugly #ifdefs.
>
> Jiri
* Re: [PATCH 1/1] vxlan: insert ipv6 macro
2016-10-12 13:01 ` zhuyj
@ 2016-10-12 13:16 ` Jiri Benc
2016-10-13 5:28 ` zhuyj
From: Jiri Benc @ 2016-10-12 13:16 UTC (permalink / raw)
To: zhuyj
Cc: netdev, pabeni, daniel, Pravin B Shelar, Alexander Duyck, hannes,
David S. Miller
On Wed, 12 Oct 2016 21:01:54 +0800, zhuyj wrote:
> How do you explain the following source code? By that reasoning, are
> the #ifdefs in the code below pointless as well?
They are not, the code would not compile without them. Look how struct
vxlan_dev is defined.
Those are really basic questions you have. I suggest you try yourself
before asking such questions next time. In this case, you could
trivially remove the #ifdef and see for yourself, as I explained in the
previous email. Please do not try to offload your homework to other
people. It's very obvious you didn't even try to understand this, even
after the feedback you received.
And do not top post.
Thanks,
Jiri
* Re: [PATCH 1/1] vxlan: insert ipv6 macro
2016-10-12 13:16 ` Jiri Benc
@ 2016-10-13 5:28 ` zhuyj
2016-10-13 5:30 ` zhuyj
From: zhuyj @ 2016-10-13 5:28 UTC (permalink / raw)
To: Jiri Benc
Cc: netdev, pabeni, daniel, Pravin B Shelar, Alexander Duyck, hannes,
David S. Miller
[-- Attachment #1: Type: text/plain, Size: 939 bytes --]
Hi, Jiri
The disassembled code is in the attachment. Please check it; I think
this file explains everything.
If anything is unclear, please just let me know.
Thanks a lot.
On Wed, Oct 12, 2016 at 9:16 PM, Jiri Benc <jbenc@redhat.com> wrote:
> On Wed, 12 Oct 2016 21:01:54 +0800, zhuyj wrote:
>> How do you explain the following source code? By that reasoning, are
>> the #ifdefs in the code below pointless as well?
>
> They are not, the code would not compile without them. Look how struct
> vxlan_dev is defined.
>
> Those are really basic questions you have. I suggest you try yourself
> before asking such questions next time. In this case, you could
> trivially remove the #ifdef and see for yourself, as I explained in the
> previous email. Please do not try to offload your homework to other
> people. It's very obvious you didn't even try to understand this, even
> after the feedback you received.
>
> And do not top post.
>
> Thanks,
>
> Jiri
[-- Attachment #2: dump-vxlan.txt --]
[-- Type: text/plain, Size: 602468 bytes --]
vxlan.ko: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <__vxlan_find_mac>:
}
/* Look up Ethernet address in forwarding table */
static struct vxlan_fdb *__vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
0: e8 00 00 00 00 callq 5 <__vxlan_find_mac+0x5>
5: 55 push %rbp
6: 48 8b 16 mov (%rsi),%rdx
9: 48 b8 eb 83 b5 80 46 movabs $0x61c8864680b583eb,%rax
10: 86 c8 61
13: 48 89 e5 mov %rsp,%rbp
16: 48 c1 e2 10 shl $0x10,%rdx
1a: 48 0f af c2 imul %rdx,%rax
1e: 48 c1 e8 38 shr $0x38,%rax
22: 48 8d 84 c7 60 01 00 lea 0x160(%rdi,%rax,8),%rax
29: 00
2a: 48 8b 00 mov (%rax),%rax
2d: 48 85 c0 test %rax,%rax
30: 74 22 je 54 <__vxlan_find_mac+0x54>
32: 8b 3e mov (%rsi),%edi
34: 0f b7 76 04 movzwl 0x4(%rsi),%esi
struct hlist_head *head = vxlan_fdb_head(vxlan, mac);
struct vxlan_fdb *f;
hlist_for_each_entry_rcu(f, head, hlist) {
38: 89 f2 mov %esi,%edx
3a: 66 33 50 44 xor 0x44(%rax),%dx
3e: 8b 48 40 mov 0x40(%rax),%ecx
41: 31 f9 xor %edi,%ecx
}
/* Look up Ethernet address in forwarding table */
static struct vxlan_fdb *__vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
43: 0f b7 d2 movzwl %dx,%edx
struct hlist_head *head = vxlan_fdb_head(vxlan, mac);
struct vxlan_fdb *f;
hlist_for_each_entry_rcu(f, head, hlist) {
46: 09 d1 or %edx,%ecx
48: 74 08 je 52 <__vxlan_find_mac+0x52>
4a: 48 8b 00 mov (%rax),%rax
4d: 48 85 c0 test %rax,%rax
50: 75 e6 jne 38 <__vxlan_find_mac+0x38>
52: 5d pop %rbp
53: c3 retq
54: 31 c0 xor %eax,%eax
56: 5d pop %rbp
57: c3 retq
58: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
5f: 00
0000000000000060 <vxlan_set_multicast_list>:
60: e8 00 00 00 00 callq 65 <vxlan_set_multicast_list+0x5>
65: 55 push %rbp
66: 48 89 e5 mov %rsp,%rbp
if (ether_addr_equal(mac, f->eth_addr))
69: 5d pop %rbp
6a: c3 retq
6b: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
0000000000000070 <vxlan_get_size>:
70: e8 00 00 00 00 callq 75 <vxlan_get_size+0x5>
75: 55 push %rbp
76: b8 c8 00 00 00 mov $0xc8,%eax
})
static __always_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
__READ_ONCE_SIZE;
7b: 48 89 e5 mov %rsp,%rbp
const u8 *mac)
{
struct hlist_head *head = vxlan_fdb_head(vxlan, mac);
struct vxlan_fdb *f;
hlist_for_each_entry_rcu(f, head, hlist) {
7e: 5d pop %rbp
7f: c3 retq
0000000000000080 <vxlan_get_link_net>:
80: e8 00 00 00 00 callq 85 <vxlan_get_link_net+0x5>
if (ether_addr_equal(mac, f->eth_addr))
return f;
}
return NULL;
85: 55 push %rbp
}
86: 48 8b 87 78 08 00 00 mov 0x878(%rdi),%rax
8d: 48 89 e5 mov %rsp,%rbp
return ret;
}
/* Stub, nothing needs to be done. */
static void vxlan_set_multicast_list(struct net_device *dev)
{
90: 5d pop %rbp
91: c3 retq
92: 0f 1f 40 00 nopl 0x0(%rax)
96: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
9d: 00 00 00
00000000000000a0 <vxlan_init_net>:
list_del(&vxlan->next);
unregister_netdevice_queue(dev, head);
}
static size_t vxlan_get_size(const struct net_device *dev)
{
a0: e8 00 00 00 00 callq a5 <vxlan_init_net+0x5>
a5: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # ab <vxlan_init_net+0xb>
ab: 48 8b 97 88 14 00 00 mov 0x1488(%rdi),%rdx
nla_put_failure:
return -EMSGSIZE;
}
static struct net *vxlan_get_link_net(const struct net_device *dev)
{
b2: 55 push %rbp
b3: 83 e8 01 sub $0x1,%eax
struct vxlan_dev *vxlan = netdev_priv(dev);
return vxlan->net;
b6: 48 89 e5 mov %rsp,%rbp
b9: 48 98 cltq
bb: 48 8b 54 c2 18 mov 0x18(%rdx,%rax,8),%rdx
}
c0: 48 89 12 mov %rdx,(%rdx)
c3: 48 89 52 08 mov %rdx,0x8(%rdx)
c7: 48 8d 42 10 lea 0x10(%rdx),%rax
cb: c7 82 10 08 00 00 00 movl $0x0,0x810(%rdx)
d2: 00 00 00
struct net_generic *ng;
void *ptr;
rcu_read_lock();
ng = rcu_dereference(net->gen);
ptr = ng->ptr[id - 1];
d5: 48 81 c2 10 08 00 00 add $0x810,%rdx
dc: 48 c7 00 00 00 00 00 movq $0x0,(%rax)
e3: 48 83 c0 08 add $0x8,%rax
static struct notifier_block vxlan_notifier_block __read_mostly = {
.notifier_call = vxlan_netdevice_event,
};
static __net_init int vxlan_init_net(struct net *net)
{
e7: 48 39 d0 cmp %rdx,%rax
ea: 75 f0 jne dc <vxlan_init_net+0x3c>
ec: 31 c0 xor %eax,%eax
ee: 5d pop %rbp
ef: c3 retq
00000000000000f0 <vxlan_find_sock>:
{
switch (size) {
case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
f0: e8 00 00 00 00 callq f5 <vxlan_find_sock+0x5>
struct list_head name = LIST_HEAD_INIT(name)
static inline void INIT_LIST_HEAD(struct list_head *list)
{
WRITE_ONCE(list->next, list);
list->prev = list;
f5: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # fb <vxlan_find_sock+0xb>
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
unsigned int h;
INIT_LIST_HEAD(&vn->vxlan_list);
spin_lock_init(&vn->sock_lock);
fb: 55 push %rbp
fc: 41 89 d0 mov %edx,%r8d
ff: 4c 8b 8f 88 14 00 00 mov 0x1488(%rdi),%r9
106: 66 c1 c2 08 rol $0x8,%dx
10a: 81 e1 00 7d 00 00 and $0x7d00,%ecx
for (h = 0; h < PORT_HASH_SIZE; ++h)
INIT_HLIST_HEAD(&vn->sock_list[h]);
110: 48 89 e5 mov %rsp,%rbp
113: 8d 78 ff lea -0x1(%rax),%edi
116: 0f b7 c2 movzwl %dx,%eax
unsigned int h;
INIT_LIST_HEAD(&vn->vxlan_list);
spin_lock_init(&vn->sock_lock);
for (h = 0; h < PORT_HASH_SIZE; ++h)
119: 69 c0 47 86 c8 61 imul $0x61c88647,%eax,%eax
INIT_HLIST_HEAD(&vn->sock_list[h]);
return 0;
}
11f: 48 63 ff movslq %edi,%rdi
/* Find VXLAN socket based on network namespace, address family and UDP port
* and enabled unshareable flags.
*/
static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family,
__be16 port, u32 flags)
{
122: 49 8b 54 f9 18 mov 0x18(%r9,%rdi,8),%rdx
127: c1 e8 18 shr $0x18,%eax
12a: 48 8d 44 c2 10 lea 0x10(%rdx,%rax,8),%rax
})
static __always_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
__READ_ONCE_SIZE;
12f: 48 8b 00 mov (%rax),%rax
132: 48 85 c0 test %rax,%rax
135: 75 1a jne 151 <vxlan_find_sock+0x61>
/* Socket hash table head */
static inline struct hlist_head *vs_head(struct net *net, __be16 port)
{
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
137: 31 c0 xor %eax,%eax
139: 5d pop %rbp
static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family,
__be16 port, u32 flags)
{
struct vxlan_sock *vs;
flags &= VXLAN_F_RCV_FLAGS;
13a: c3 retq
13b: 66 3b 72 10 cmp 0x10(%rdx),%si
13f: 75 08 jne 149 <vxlan_find_sock+0x59>
/* Find VXLAN socket based on network namespace, address family and UDP port
* and enabled unshareable flags.
*/
static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family,
__be16 port, u32 flags)
{
141: 3b 88 1c 20 00 00 cmp 0x201c(%rax),%ecx
struct vxlan_sock *vs;
flags &= VXLAN_F_RCV_FLAGS;
hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) {
147: 74 f0 je 139 <vxlan_find_sock+0x49>
149: 48 8b 00 mov (%rax),%rax
14c: 48 85 c0 test %rax,%rax
14f: 74 e8 je 139 <vxlan_find_sock+0x49>
151: 48 8b 50 10 mov 0x10(%rax),%rdx
155: 48 8b 52 20 mov 0x20(%rdx),%rdx
159: 66 44 3b 82 e0 02 00 cmp 0x2e0(%rdx),%r8w
160: 00
161: 75 e6 jne 149 <vxlan_find_sock+0x59>
163: eb d6 jmp 13b <vxlan_find_sock+0x4b>
165: 90 nop
166: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
16d: 00 00 00
0000000000000170 <vxlan_find_vni>:
if (inet_sk(vs->sock->sk)->inet_sport == port &&
170: e8 00 00 00 00 callq 175 <vxlan_find_vni+0x5>
vxlan_get_sk_family(vs) == family &&
175: 55 push %rbp
176: 48 89 e5 mov %rsp,%rbp
179: 53 push %rbx
17a: 89 f3 mov %esi,%ebx
{
struct vxlan_sock *vs;
flags &= VXLAN_F_RCV_FLAGS;
hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) {
17c: 89 d6 mov %edx,%esi
17e: 0f b7 d1 movzwl %cx,%edx
if (inet_sk(vs->sock->sk)->inet_sport == port &&
181: 44 89 c1 mov %r8d,%ecx
184: 0f b7 f6 movzwl %si,%esi
187: e8 64 ff ff ff callq f0 <vxlan_find_sock>
18c: 48 85 c0 test %rax,%rax
18f: 74 2f je 1c0 <vxlan_find_vni+0x50>
191: f6 80 1d 20 00 00 20 testb $0x20,0x201d(%rax)
198: 75 2b jne 1c5 <vxlan_find_vni+0x55>
19a: 69 d3 47 86 c8 61 imul $0x61c88647,%ebx,%edx
/* Look up VNI in a per net namespace table */
static struct vxlan_dev *vxlan_find_vni(struct net *net, __be32 vni,
sa_family_t family, __be16 port,
u32 flags)
{
1a0: c1 ea 16 shr $0x16,%edx
1a3: 48 8d 44 d0 10 lea 0x10(%rax,%rdx,8),%rax
1a8: 48 8b 50 08 mov 0x8(%rax),%rdx
1ac: 31 c0 xor %eax,%eax
struct vxlan_sock *vs;
vs = vxlan_find_sock(net, family, port, flags);
1ae: 48 85 d2 test %rdx,%rdx
1b1: 74 0f je 1c2 <vxlan_find_vni+0x52>
1b3: 3b 5a 60 cmp 0x60(%rdx),%ebx
1b6: 74 13 je 1cb <vxlan_find_vni+0x5b>
1b8: 48 8b 12 mov (%rdx),%rdx
1bb: 48 85 d2 test %rdx,%rdx
if (!vs)
1be: 75 f3 jne 1b3 <vxlan_find_vni+0x43>
1c0: 31 c0 xor %eax,%eax
static struct vxlan_dev *vxlan_vs_find_vni(struct vxlan_sock *vs, __be32 vni)
{
struct vxlan_dev *vxlan;
/* For flow based devices, map all packets to VNI 0 */
if (vs->flags & VXLAN_F_COLLECT_METADATA)
1c2: 5b pop %rbx
1c3: 5d pop %rbp
1c4: c3 retq
1c5: 31 d2 xor %edx,%edx
1c7: 31 db xor %ebx,%ebx
1c9: eb d8 jmp 1a3 <vxlan_find_vni+0x33>
1cb: 48 89 d0 mov %rdx,%rax
1ce: 5b pop %rbx
1cf: 5d pop %rbp
1d0: c3 retq
1d1: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
vni = 0;
hlist_for_each_entry_rcu(vxlan, vni_head(vs, vni), hlist) {
1d6: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
1dd: 00 00 00
00000000000001e0 <vxlan_validate>:
1e0: e8 00 00 00 00 callq 1e5 <vxlan_validate+0x5>
if (vxlan->default_dst.remote_vni == vni)
1e5: 55 push %rbp
1e6: 48 8b 47 08 mov 0x8(%rdi),%rax
1ea: 48 89 e5 mov %rsp,%rbp
/* For flow based devices, map all packets to VNI 0 */
if (vs->flags & VXLAN_F_COLLECT_METADATA)
vni = 0;
hlist_for_each_entry_rcu(vxlan, vni_head(vs, vni), hlist) {
1ed: 48 85 c0 test %rax,%rax
{
struct vxlan_sock *vs;
vs = vxlan_find_sock(net, family, port, flags);
if (!vs)
return NULL;
1f0: 74 3c je 22e <vxlan_validate+0x4e>
return vxlan_vs_find_vni(vs, vni);
}
1f2: 66 83 38 0a cmpw $0xa,(%rax)
static struct vxlan_dev *vxlan_vs_find_vni(struct vxlan_sock *vs, __be32 vni)
{
struct vxlan_dev *vxlan;
/* For flow based devices, map all packets to VNI 0 */
if (vs->flags & VXLAN_F_COLLECT_METADATA)
1f6: 74 26 je 21e <vxlan_validate+0x3e>
vni = 0;
1f8: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
hlist_for_each_entry_rcu(vxlan, vni_head(vs, vni), hlist) {
if (vxlan->default_dst.remote_vni == vni)
1fd: b8 ea ff ff ff mov $0xffffffea,%eax
vs = vxlan_find_sock(net, family, port, flags);
if (!vs)
return NULL;
return vxlan_vs_find_vni(vs, vni);
}
202: 5d pop %rbp
203: c3 retq
204: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
20b: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
[IFLA_VXLAN_GPE] = { .type = NLA_FLAG, },
[IFLA_VXLAN_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG },
};
static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[])
{
212: e8 00 00 00 00 callq 217 <vxlan_validate+0x37>
if (tb[IFLA_ADDRESS]) {
217: b8 ea ff ff ff mov $0xffffffea,%eax
[IFLA_VXLAN_GPE] = { .type = NLA_FLAG, },
[IFLA_VXLAN_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG },
};
static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[])
{
21c: 5d pop %rbp
if (tb[IFLA_ADDRESS]) {
21d: c3 retq
21e: 8b 50 04 mov 0x4(%rax),%edx
221: f6 c2 01 test $0x1,%dl
if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN) {
224: 75 46 jne 26c <vxlan_validate+0x8c>
226: 0f b7 40 08 movzwl 0x8(%rax),%eax
22a: 09 d0 or %edx,%eax
22c: 74 3e je 26c <vxlan_validate+0x8c>
= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
if (ntohs(p->high) < ntohs(p->low)) {
pr_debug("port range %u .. %u not valid\n",
ntohs(p->low), ntohs(p->high));
return -EINVAL;
22e: 48 85 f6 test %rsi,%rsi
231: 74 5e je 291 <vxlan_validate+0xb1>
}
}
return 0;
}
233: 48 8b 46 08 mov 0x8(%rsi),%rax
static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[])
{
if (tb[IFLA_ADDRESS]) {
if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN) {
pr_debug("invalid link address (not ethernet)\n");
237: 48 85 c0 test %rax,%rax
23a: 74 09 je 245 <vxlan_validate+0x65>
23c: 81 78 04 fe ff ff 00 cmpl $0xfffffe,0x4(%rax)
243: 77 71 ja 2b6 <vxlan_validate+0xd6>
245: 48 8b 46 50 mov 0x50(%rsi),%rax
return -EINVAL;
249: 48 85 c0 test %rax,%rax
return -EINVAL;
}
}
return 0;
}
24c: 74 64 je 2b2 <vxlan_validate+0xd2>
* By definition the broadcast address is also a multicast address.
*/
static inline bool is_multicast_ether_addr(const u8 *addr)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
u32 a = *(const u32 *)addr;
24e: 0f b7 48 06 movzwl 0x6(%rax),%ecx
*/
static inline bool is_valid_ether_addr(const u8 *addr)
{
/* FF:FF:FF:FF:FF:FF is a multicast address so we don't need to
* explicitly check for it here. */
return !is_multicast_ether_addr(addr) && !is_zero_ether_addr(addr);
252: 0f b7 50 04 movzwl 0x4(%rax),%edx
256: 31 c0 xor %eax,%eax
258: 66 c1 c1 08 rol $0x8,%cx
25c: 66 c1 c2 08 rol $0x8,%dx
pr_debug("invalid all zero ethernet address\n");
return -EADDRNOTAVAIL;
}
}
if (!data)
260: 66 39 d1 cmp %dx,%cx
return -EINVAL;
if (data[IFLA_VXLAN_ID]) {
263: 73 9d jae 202 <vxlan_validate+0x22>
265: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
26a: eb 91 jmp 1fd <vxlan_validate+0x1d>
__u32 id = nla_get_u32(data[IFLA_VXLAN_ID]);
if (id >= VXLAN_VID_MASK)
26c: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
271: b8 9d ff ff ff mov $0xffffff9d,%eax
return -ERANGE;
}
if (data[IFLA_VXLAN_PORT_RANGE]) {
276: 5d pop %rbp
277: c3 retq
278: 0f b7 c9 movzwl %cx,%ecx
27b: 0f b7 d2 movzwl %dx,%edx
const struct ifla_vxlan_port_range *p
= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
if (ntohs(p->high) < ntohs(p->low)) {
27e: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
285: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
28c: e8 00 00 00 00 callq 291 <vxlan_validate+0xb1>
291: b8 ea ff ff ff mov $0xffffffea,%eax
296: 5d pop %rbp
297: c3 retq
298: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
#include <linux/stringify.h>
#include <linux/types.h>
static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
{
asm_volatile_goto("1:"
29f: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
return -EINVAL;
}
}
return 0;
}
2a6: e8 00 00 00 00 callq 2ab <vxlan_validate+0xcb>
if (data[IFLA_VXLAN_PORT_RANGE]) {
const struct ifla_vxlan_port_range *p
= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
if (ntohs(p->high) < ntohs(p->low)) {
pr_debug("port range %u .. %u not valid\n",
2ab: b8 9d ff ff ff mov $0xffffff9d,%eax
2b0: 5d pop %rbp
2b1: c3 retq
2b2: 31 c0 xor %eax,%eax
2b4: 5d pop %rbp
2b5: c3 retq
2b6: b8 de ff ff ff mov $0xffffffde,%eax
2bb: 5d pop %rbp
2bc: c3 retq
2bd: 0f 1f 00 nopl (%rax)
00000000000002c0 <vxlan_fdb_free>:
2c0: e8 00 00 00 00 callq 2c5 <vxlan_fdb_free+0x5>
ntohs(p->low), ntohs(p->high));
return -EINVAL;
2c5: 55 push %rbp
}
}
return 0;
}
2c6: 48 89 e5 mov %rsp,%rbp
pr_debug("invalid link address (not ethernet)\n");
return -EINVAL;
}
if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS]))) {
pr_debug("invalid all zero ethernet address\n");
2c9: 41 56 push %r14
2cb: 41 55 push %r13
2cd: 41 54 push %r12
2cf: 53 push %rbx
2d0: 4c 8d 6f 20 lea 0x20(%rdi),%r13
2d4: 48 8b 47 20 mov 0x20(%rdi),%rax
2d8: 4c 8d 77 f0 lea -0x10(%rdi),%r14
return -EADDRNOTAVAIL;
2dc: 48 8b 08 mov (%rax),%rcx
2df: 49 39 c5 cmp %rax,%r13
ntohs(p->low), ntohs(p->high));
return -EINVAL;
}
}
return 0;
2e2: 4c 8d 60 d8 lea -0x28(%rax),%r12
return -EINVAL;
if (data[IFLA_VXLAN_ID]) {
__u32 id = nla_get_u32(data[IFLA_VXLAN_ID]);
if (id >= VXLAN_VID_MASK)
return -ERANGE;
2e6: 48 8d 59 d8 lea -0x28(%rcx),%rbx
2ea: 74 26 je 312 <vxlan_fdb_free+0x52>
return -EINVAL;
}
}
return 0;
}
2ec: 49 8d 7c 24 48 lea 0x48(%r12),%rdi
return 0;
}
static void vxlan_fdb_free(struct rcu_head *head)
{
2f1: e8 00 00 00 00 callq 2f6 <vxlan_fdb_free+0x36>
2f6: 4c 89 e7 mov %r12,%rdi
2f9: 49 89 dc mov %rbx,%r12
2fc: e8 00 00 00 00 callq 301 <vxlan_fdb_free+0x41>
struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
struct vxlan_rdst *rd, *nd;
list_for_each_entry_safe(rd, nd, &f->remotes, list) {
301: 48 8d 43 28 lea 0x28(%rbx),%rax
305: 48 8b 53 28 mov 0x28(%rbx),%rdx
return 0;
}
static void vxlan_fdb_free(struct rcu_head *head)
{
struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
309: 4c 39 e8 cmp %r13,%rax
struct vxlan_rdst *rd, *nd;
list_for_each_entry_safe(rd, nd, &f->remotes, list) {
30c: 48 8d 5a d8 lea -0x28(%rdx),%rbx
310: 75 da jne 2ec <vxlan_fdb_free+0x2c>
312: 4c 89 f7 mov %r14,%rdi
315: e8 00 00 00 00 callq 31a <vxlan_fdb_free+0x5a>
31a: 5b pop %rbx
31b: 41 5c pop %r12
dst_cache_destroy(&rd->dst_cache);
31d: 41 5d pop %r13
31f: 41 5e pop %r14
321: 5d pop %rbp
322: c3 retq
323: 0f 1f 00 nopl (%rax)
kfree(rd);
326: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
32d: 00 00 00
0000000000000330 <gro_cell_poll>:
330: 55 push %rbp
static void vxlan_fdb_free(struct rcu_head *head)
{
struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
struct vxlan_rdst *rd, *nd;
list_for_each_entry_safe(rd, nd, &f->remotes, list) {
331: 48 89 e5 mov %rsp,%rbp
334: 41 56 push %r14
336: 41 55 push %r13
338: 45 31 ed xor %r13d,%r13d
33b: 85 f6 test %esi,%esi
33d: 41 54 push %r12
33f: 53 push %rbx
340: 7e 6e jle 3b0 <gro_cell_poll+0x80>
dst_cache_destroy(&rd->dst_cache);
kfree(rd);
}
kfree(f);
342: 41 89 f5 mov %esi,%r13d
345: 48 8b 77 e8 mov -0x18(%rdi),%rsi
349: 4c 8d 77 e8 lea -0x18(%rdi),%r14
}
34d: 48 89 fb mov %rdi,%rbx
350: 49 39 f6 cmp %rsi,%r14
353: 74 4d je 3a2 <gro_cell_poll+0x72>
355: 48 85 f6 test %rsi,%rsi
358: 74 48 je 3a2 <gro_cell_poll+0x72>
35a: 45 31 e4 xor %r12d,%r12d
35d: 83 6b f8 01 subl $0x1,-0x8(%rbx)
return NET_RX_SUCCESS;
}
/* called under BH context */
static inline int gro_cell_poll(struct napi_struct *napi, int budget)
{
361: 48 89 df mov %rbx,%rdi
364: 41 83 c4 01 add $0x1,%r12d
struct gro_cell *cell = container_of(napi, struct gro_cell, napi);
struct sk_buff *skb;
int work_done = 0;
368: 48 8b 16 mov (%rsi),%rdx
while (work_done < budget) {
36b: 48 8b 46 08 mov 0x8(%rsi),%rax
return NET_RX_SUCCESS;
}
/* called under BH context */
static inline int gro_cell_poll(struct napi_struct *napi, int budget)
{
36f: 48 c7 06 00 00 00 00 movq $0x0,(%rsi)
* The reference count is not incremented and the reference is therefore
* volatile. Use with caution.
*/
static inline struct sk_buff *skb_peek(const struct sk_buff_head *list_)
{
struct sk_buff *skb = list_->next;
376: 48 c7 46 08 00 00 00 movq $0x0,0x8(%rsi)
37d: 00
struct gro_cell *cell = container_of(napi, struct gro_cell, napi);
struct sk_buff *skb;
int work_done = 0;
while (work_done < budget) {
skb = __skb_dequeue(&cell->napi_skbs);
37e: 48 89 42 08 mov %rax,0x8(%rdx)
*/
struct sk_buff *skb_dequeue(struct sk_buff_head *list);
static inline struct sk_buff *__skb_dequeue(struct sk_buff_head *list)
{
struct sk_buff *skb = skb_peek(list);
if (skb)
382: 48 89 10 mov %rdx,(%rax)
385: e8 00 00 00 00 callq 38a <gro_cell_poll+0x5a>
38a: 45 39 e5 cmp %r12d,%r13d
void skb_unlink(struct sk_buff *skb, struct sk_buff_head *list);
static inline void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)
{
struct sk_buff *next, *prev;
list->qlen--;
38d: 74 21 je 3b0 <gro_cell_poll+0x80>
38f: 48 8b 73 e8 mov -0x18(%rbx),%rsi
if (!skb)
break;
napi_gro_receive(napi, skb);
393: 48 85 f6 test %rsi,%rsi
work_done++;
396: 74 05 je 39d <gro_cell_poll+0x6d>
next = skb->next;
398: 4c 39 f6 cmp %r14,%rsi
prev = skb->prev;
39b: 75 c0 jne 35d <gro_cell_poll+0x2d>
39d: 45 89 e5 mov %r12d,%r13d
skb->next = skb->prev = NULL;
3a0: eb 03 jmp 3a5 <gro_cell_poll+0x75>
3a2: 45 31 ed xor %r13d,%r13d
3a5: 44 89 ee mov %r13d,%esi
3a8: 48 89 df mov %rbx,%rdi
3ab: e8 00 00 00 00 callq 3b0 <gro_cell_poll+0x80>
next->prev = prev;
3b0: 5b pop %rbx
3b1: 44 89 e8 mov %r13d,%eax
prev->next = next;
3b4: 41 5c pop %r12
while (work_done < budget) {
skb = __skb_dequeue(&cell->napi_skbs);
if (!skb)
break;
napi_gro_receive(napi, skb);
3b6: 41 5d pop %r13
3b8: 41 5e pop %r14
{
struct gro_cell *cell = container_of(napi, struct gro_cell, napi);
struct sk_buff *skb;
int work_done = 0;
while (work_done < budget) {
3ba: 5d pop %rbp
3bb: c3 retq
3bc: 0f 1f 40 00 nopl 0x0(%rax)
00000000000003c0 <vxlan_setup>:
* The reference count is not incremented and the reference is therefore
* volatile. Use with caution.
*/
static inline struct sk_buff *skb_peek(const struct sk_buff_head *list_)
{
struct sk_buff *skb = list_->next;
3c0: e8 00 00 00 00 callq 3c5 <vxlan_setup+0x5>
*/
struct sk_buff *skb_dequeue(struct sk_buff_head *list);
static inline struct sk_buff *__skb_dequeue(struct sk_buff_head *list)
{
struct sk_buff *skb = skb_peek(list);
if (skb)
3c5: 55 push %rbp
3c6: be 06 00 00 00 mov $0x6,%esi
3cb: 48 89 e5 mov %rsp,%rbp
skb = __skb_dequeue(&cell->napi_skbs);
if (!skb)
break;
napi_gro_receive(napi, skb);
work_done++;
3ce: 41 55 push %r13
3d0: 41 54 push %r12
/* called under BH context */
static inline int gro_cell_poll(struct napi_struct *napi, int budget)
{
struct gro_cell *cell = container_of(napi, struct gro_cell, napi);
struct sk_buff *skb;
int work_done = 0;
3d2: 53 push %rbx
3d3: 48 8b 9f 38 03 00 00 mov 0x338(%rdi),%rbx
napi_gro_receive(napi, skb);
work_done++;
}
if (work_done < budget)
napi_complete_done(napi, work_done);
3da: 49 89 fc mov %rdi,%r12
3dd: c6 87 74 02 00 00 01 movb $0x1,0x274(%rdi)
return work_done;
}
3e4: 48 89 df mov %rbx,%rdi
3e7: e8 00 00 00 00 callq 3ec <vxlan_setup+0x2c>
3ec: 0f b6 03 movzbl (%rbx),%eax
3ef: 4c 89 e7 mov %r12,%rdi
spin_unlock(&vn->sock_lock);
}
/* Initialize the device structure. */
static void vxlan_setup(struct net_device *dev)
{
3f2: 83 e0 fe and $0xfffffffe,%eax
3f5: 83 c8 02 or $0x2,%eax
* Generate a random Ethernet address (MAC) that is not multicast
* and has the local assigned bit set.
*/
static inline void eth_random_addr(u8 *addr)
{
get_random_bytes(addr, ETH_ALEN);
3f8: 88 03 mov %al,(%rbx)
3fa: 49 8d 9c 24 40 08 00 lea 0x840(%r12),%rbx
401: 00
402: e8 00 00 00 00 callq 407 <vxlan_setup+0x47>
407: 49 8b 8c 24 f0 00 00 mov 0xf0(%r12),%rcx
40e: 00
* and set addr_assign_type so the state can be read by sysfs and be
* used by userspace.
*/
static inline void eth_hw_addr_random(struct net_device *dev)
{
dev->addr_assign_type = NET_ADDR_RANDOM;
40f: 48 b8 89 10 3b 80 20 movabs $0x420803b1089,%rax
416: 04 00 00
* Generate a random Ethernet address (MAC) that is not multicast
* and has the local assigned bit set.
*/
static inline void eth_random_addr(u8 *addr)
{
get_random_bytes(addr, ETH_ALEN);
419: 48 ba 09 10 3b 80 20 movabs $0x20803b1009,%rdx
420: 00 00 00
addr[0] &= 0xfe; /* clear multicast bit */
addr[0] |= 0x02; /* set local assignment bit (IEEE802) */
423: 49 8d bc 24 e0 08 00 lea 0x8e0(%r12),%rdi
42a: 00
dev->hw_features |= NETIF_F_GSO_SOFTWARE;
dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
netif_keep_dst(dev);
dev->priv_flags |= IFF_NO_QUEUE;
INIT_LIST_HEAD(&vxlan->next);
42b: be 00 00 08 00 mov $0x80000,%esi
430: 49 c7 84 24 70 04 00 movq $0x0,0x470(%r12)
437: 00 00 00 00 00
ether_setup(dev);
dev->destructor = free_netdev;
SET_NETDEV_DEVTYPE(dev, &vxlan_type);
dev->features |= NETIF_F_LLTX;
43c: 49 c7 84 24 f8 04 00 movq $0x0,0x4f8(%r12)
443: 00 00 00 00 00
dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM;
dev->features |= NETIF_F_RXCSUM;
dev->features |= NETIF_F_GSO_SOFTWARE;
dev->vlan_features = dev->features;
dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
448: 41 c7 84 24 28 09 00 movl $0x0,0x928(%r12)
44f: 00 00 00 00 00
dev->priv_flags |= IFF_NO_QUEUE;
INIT_LIST_HEAD(&vxlan->next);
spin_lock_init(&vxlan->hash_lock);
init_timer_deferrable(&vxlan->age_timer);
454: 48 09 c8 or %rcx,%rax
457: 48 09 ca or %rcx,%rdx
45a: 31 c9 xor %ecx,%ecx
45c: 49 89 84 24 f0 00 00 mov %rax,0xf0(%r12)
463: 00
unsigned int h;
eth_hw_addr_random(dev);
ether_setup(dev);
dev->destructor = free_netdev;
464: 48 b8 89 00 3b 80 20 movabs $0x420803b0089,%rax
46b: 04 00 00
SET_NETDEV_DEVTYPE(dev, &vxlan_type);
46e: 49 09 84 24 f8 00 00 or %rax,0xf8(%r12)
475: 00
476: 41 8b 84 24 3c 02 00 mov 0x23c(%r12),%eax
47d: 00
dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
netif_keep_dst(dev);
dev->priv_flags |= IFF_NO_QUEUE;
INIT_LIST_HEAD(&vxlan->next);
spin_lock_init(&vxlan->hash_lock);
47e: 49 89 94 24 08 01 00 mov %rdx,0x108(%r12)
485: 00
dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM;
dev->features |= NETIF_F_RXCSUM;
dev->features |= NETIF_F_GSO_SOFTWARE;
dev->vlan_features = dev->features;
dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
486: 31 d2 xor %edx,%edx
dev->features |= NETIF_F_LLTX;
dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM;
dev->features |= NETIF_F_RXCSUM;
dev->features |= NETIF_F_GSO_SOFTWARE;
dev->vlan_features = dev->features;
488: 25 df ff fd ff and $0xfffdffdf,%eax
dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
48d: 0d 00 00 20 00 or $0x200000,%eax
492: 41 89 84 24 3c 02 00 mov %eax,0x23c(%r12)
499: 00
dev->hw_features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
dev->hw_features |= NETIF_F_GSO_SOFTWARE;
dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
49a: 49 8d 84 24 50 08 00 lea 0x850(%r12),%rax
4a1: 00
4a2: 49 89 84 24 50 08 00 mov %rax,0x850(%r12)
4a9: 00
netif_keep_dst(dev);
dev->priv_flags |= IFF_NO_QUEUE;
4aa: 49 89 84 24 58 08 00 mov %rax,0x858(%r12)
4b1: 00
dev->features |= NETIF_F_LLTX;
dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM;
dev->features |= NETIF_F_RXCSUM;
dev->features |= NETIF_F_GSO_SOFTWARE;
dev->vlan_features = dev->features;
4b2: e8 00 00 00 00 callq 4b7 <vxlan_setup+0xf7>
dev->priv_flags |= IFF_NO_QUEUE;
INIT_LIST_HEAD(&vxlan->next);
spin_lock_init(&vxlan->hash_lock);
init_timer_deferrable(&vxlan->age_timer);
4b7: 0f b7 05 00 00 00 00 movzwl 0x0(%rip),%eax # 4be <vxlan_setup+0xfe>
dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
dev->hw_features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
dev->hw_features |= NETIF_F_GSO_SOFTWARE;
dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
netif_keep_dst(dev);
dev->priv_flags |= IFF_NO_QUEUE;
4be: 49 c7 84 24 f8 08 00 movq $0x0,0x8f8(%r12)
4c5: 00 00 00 00 00
INIT_LIST_HEAD(&vxlan->next);
4ca: be 08 00 00 00 mov $0x8,%esi
4cf: 49 89 9c 24 00 09 00 mov %rbx,0x900(%r12)
4d6: 00
{
switch (size) {
case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
4d7: 4d 89 a4 24 70 08 00 mov %r12,0x870(%r12)
4de: 00
4df: bf e0 00 00 00 mov $0xe0,%edi
spin_lock_init(&vxlan->hash_lock);
init_timer_deferrable(&vxlan->age_timer);
4e4: 66 c1 c0 08 rol $0x8,%ax
vxlan->age_timer.function = vxlan_cleanup;
vxlan->age_timer.data = (unsigned long) vxlan;
vxlan->cfg.dst_port = htons(vxlan_port);
4e8: 66 41 89 84 24 7c 09 mov %ax,0x97c(%r12)
4ef: 00 00
INIT_LIST_HEAD(&vxlan->next);
spin_lock_init(&vxlan->hash_lock);
init_timer_deferrable(&vxlan->age_timer);
vxlan->age_timer.function = vxlan_cleanup;
4f1: e8 00 00 00 00 callq 4f6 <vxlan_setup+0x136>
4f6: 48 85 c0 test %rax,%rax
4f9: 49 89 84 24 30 09 00 mov %rax,0x930(%r12)
500: 00
vxlan->age_timer.data = (unsigned long) vxlan;
501: 74 77 je 57a <vxlan_setup+0x1ba>
503: 41 bd ff ff ff ff mov $0xffffffff,%r13d
vxlan->cfg.dst_port = htons(vxlan_port);
vxlan->dev = dev;
509: eb 0a jmp 515 <vxlan_setup+0x155>
50b: f0 80 63 28 fe lock andb $0xfe,0x28(%rbx)
static inline int gro_cells_init(struct gro_cells *gcells, struct net_device *dev)
{
int i;
gcells->cells = alloc_percpu(struct gro_cell);
510: f0 80 63 28 fb lock andb $0xfb,0x28(%rbx)
init_timer_deferrable(&vxlan->age_timer);
vxlan->age_timer.function = vxlan_cleanup;
vxlan->age_timer.data = (unsigned long) vxlan;
vxlan->cfg.dst_port = htons(vxlan_port);
515: 41 8d 55 01 lea 0x1(%r13),%edx
519: be 00 01 00 00 mov $0x100,%esi
51e: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
525: 48 63 d2 movslq %edx,%rdx
if (!gcells->cells)
528: e8 00 00 00 00 callq 52d <vxlan_setup+0x16d>
static inline int gro_cells_init(struct gro_cells *gcells, struct net_device *dev)
{
int i;
gcells->cells = alloc_percpu(struct gro_cell);
52d: 3b 05 00 00 00 00 cmp 0x0(%rip),%eax # 533 <vxlan_setup+0x173>
if (!gcells->cells)
533: 41 89 c5 mov %eax,%r13d
536: 7d 42 jge 57a <vxlan_setup+0x1ba>
538: 48 98 cltq
53a: 49 8b 9c 24 30 09 00 mov 0x930(%r12),%rbx
541: 00
*/
static __always_inline void
clear_bit(long nr, volatile unsigned long *addr)
{
if (IS_IMMEDIATE(nr)) {
asm volatile(LOCK_PREFIX "andb %1,%0"
542: b9 40 00 00 00 mov $0x40,%ecx
static inline unsigned int cpumask_next(int n, const struct cpumask *srcp)
{
/* -1 is a legal arg here. */
if (n != -1)
cpumask_check(n);
return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
547: 48 03 1c c5 00 00 00 add 0x0(,%rax,8),%rbx
54e: 00
54f: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
556: 4c 89 e7 mov %r12,%rdi
559: 48 8d 73 18 lea 0x18(%rbx),%rsi
return -ENOMEM;
for_each_possible_cpu(i) {
55d: 48 89 1b mov %rbx,(%rbx)
560: 48 89 5b 08 mov %rbx,0x8(%rbx)
564: c7 43 10 00 00 00 00 movl $0x0,0x10(%rbx)
struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
56b: e8 00 00 00 00 callq 570 <vxlan_setup+0x1b0>
570: 48 8b 43 28 mov 0x28(%rbx),%rax
__skb_queue_head_init(&cell->napi_skbs);
netif_napi_add(dev, &cell->napi, gro_cell_poll, 64);
574: a8 01 test $0x1,%al
576: 75 93 jne 50b <vxlan_setup+0x14b>
gcells->cells = alloc_percpu(struct gro_cell);
if (!gcells->cells)
return -ENOMEM;
for_each_possible_cpu(i) {
struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
578: 0f 0b ud2
57a: 49 8d 84 24 a0 09 00 lea 0x9a0(%r12),%rax
581: 00
__skb_queue_head_init(&cell->napi_skbs);
netif_napi_add(dev, &cell->napi, gro_cell_poll, 64);
582: 49 8d 94 24 a0 11 00 lea 0x11a0(%r12),%rdx
589: 00
58a: 48 c7 00 00 00 00 00 movq $0x0,(%rax)
* the spinlock. It can also be used for on-stack sk_buff_head
* objects where the spinlock is known to not be used.
*/
static inline void __skb_queue_head_init(struct sk_buff_head *list)
{
list->prev = list->next = (struct sk_buff *)list;
591: 48 83 c0 08 add $0x8,%rax
list->qlen = 0;
595: 48 39 d0 cmp %rdx,%rax
598: 75 f0 jne 58a <vxlan_setup+0x1ca>
59a: 5b pop %rbx
59b: 41 5c pop %r12
59d: 41 5d pop %r13
59f: 5d pop %rbp
}
static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
{
return ((1UL << (nr & (BITS_PER_LONG-1))) &
(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
5a0: c3 retq
5a1: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
* Resume NAPI from being scheduled on this context.
* Must be paired with napi_disable.
*/
static inline void napi_enable(struct napi_struct *n)
{
BUG_ON(!test_bit(NAPI_STATE_SCHED, &n->state));
5a6: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
5ad: 00 00 00
00000000000005b0 <vxlan_change_mtu>:
5b0: e8 00 00 00 00 callq 5b5 <vxlan_change_mtu+0x5>
5b5: 55 push %rbp
5b6: 48 89 e5 mov %rsp,%rbp
5b9: 41 54 push %r12
vxlan->dev = dev;
gro_cells_init(&vxlan->gro_cells, dev);
for (h = 0; h < FDB_HASH_SIZE; ++h)
INIT_HLIST_HEAD(&vxlan->fdb_head[h]);
5bb: 41 89 f4 mov %esi,%r12d
5be: 53 push %rbx
5bf: 48 89 fb mov %rdi,%rbx
5c2: 8b b7 a4 08 00 00 mov 0x8a4(%rdi),%esi
vxlan->dev = dev;
gro_cells_init(&vxlan->gro_cells, dev);
for (h = 0; h < FDB_HASH_SIZE; ++h)
5c8: 48 8b bf 78 08 00 00 mov 0x878(%rdi),%rdi
INIT_HLIST_HEAD(&vxlan->fdb_head[h]);
}
5cf: e8 00 00 00 00 callq 5d4 <vxlan_change_mtu+0x24>
5d4: 48 85 c0 test %rax,%rax
5d7: 74 30 je 609 <vxlan_change_mtu+0x59>
5d9: 8b 80 48 02 00 00 mov 0x248(%rax),%eax
5df: 8d 50 ba lea -0x46(%rax),%edx
dev->mtu = new_mtu;
return 0;
}
static int vxlan_change_mtu(struct net_device *dev, int new_mtu)
{
5e2: 83 e8 32 sub $0x32,%eax
5e5: 66 83 bb 80 08 00 00 cmpw $0xa,0x880(%rbx)
5ec: 0a
5ed: 0f 44 c2 cmove %edx,%eax
5f0: 41 39 c4 cmp %eax,%r12d
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_rdst *dst = &vxlan->default_dst;
struct net_device *lowerdev = __dev_get_by_index(vxlan->net,
5f3: 7f 1b jg 610 <vxlan_change_mtu+0x60>
5f5: 41 83 fc 43 cmp $0x43,%r12d
5f9: 7e 15 jle 610 <vxlan_change_mtu+0x60>
5fb: 44 89 a3 48 02 00 00 mov %r12d,0x248(%rbx)
602: 31 c0 xor %eax,%eax
struct net_device *lowerdev,
struct vxlan_rdst *dst, int new_mtu, bool strict)
{
int max_mtu = IP_MAX_MTU;
if (lowerdev)
604: 5b pop %rbx
605: 41 5c pop %r12
607: 5d pop %rbp
608: c3 retq
max_mtu = lowerdev->mtu;
609: b8 ff ff 00 00 mov $0xffff,%eax
60e: eb cf jmp 5df <vxlan_change_mtu+0x2f>
if (dst->remote_ip.sa.sa_family == AF_INET6)
max_mtu -= VXLAN6_HEADROOM;
610: b8 ea ff ff ff mov $0xffffffea,%eax
615: eb ed jmp 604 <vxlan_change_mtu+0x54>
617: 66 0f 1f 84 00 00 00 nopw 0x0(%rax,%rax,1)
61e: 00 00
0000000000000620 <vxlan_fdb_parse>:
max_mtu -= VXLAN_HEADROOM;
if (new_mtu < 68)
return -EINVAL;
if (new_mtu > max_mtu) {
620: e8 00 00 00 00 callq 625 <vxlan_fdb_parse+0x5>
625: 55 push %rbp
626: 48 89 e5 mov %rsp,%rbp
629: 41 56 push %r14
return -EINVAL;
new_mtu = max_mtu;
}
dev->mtu = new_mtu;
62b: 41 55 push %r13
62d: 41 54 push %r12
62f: 53 push %rbx
630: 49 89 f5 mov %rsi,%r13
return 0;
633: 48 89 fb mov %rdi,%rbx
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_rdst *dst = &vxlan->default_dst;
struct net_device *lowerdev = __dev_get_by_index(vxlan->net,
dst->remote_ifindex);
return __vxlan_change_mtu(dev, lowerdev, dst, new_mtu, true);
}
636: 49 89 d4 mov %rdx,%r12
static int __vxlan_change_mtu(struct net_device *dev,
struct net_device *lowerdev,
struct vxlan_rdst *dst, int new_mtu, bool strict)
{
int max_mtu = IP_MAX_MTU;
639: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
63d: 48 83 ec 40 sub $0x40,%rsp
if (new_mtu < 68)
return -EINVAL;
if (new_mtu > max_mtu) {
if (strict)
return -EINVAL;
641: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
648: 00 00
64a: 48 89 44 24 38 mov %rax,0x38(%rsp)
64f: 31 c0 xor %eax,%eax
}
static int vxlan_fdb_parse(struct nlattr *tb[], struct vxlan_dev *vxlan,
union vxlan_addr *ip, __be16 *port, __be32 *vni,
u32 *ifindex)
{
651: 48 8b 46 30 mov 0x30(%rsi),%rax
655: 48 8b 77 08 mov 0x8(%rdi),%rsi
659: 48 85 f6 test %rsi,%rsi
65c: 4c 8b b0 80 04 00 00 mov 0x480(%rax),%r14
663: 0f 84 ad 00 00 00 je 716 <vxlan_fdb_parse+0xf6>
669: 0f b7 06 movzwl (%rsi),%eax
66c: 83 e8 04 sub $0x4,%eax
66f: 83 f8 0f cmp $0xf,%eax
672: 0f 87 fd 00 00 00 ja 775 <vxlan_fdb_parse+0x155>
678: 83 f8 03 cmp $0x3,%eax
67b: 0f 86 68 01 00 00 jbe 7e9 <vxlan_fdb_parse+0x1c9>
}
static inline struct net *read_pnet(const possible_net_t *pnet)
{
#ifdef CONFIG_NET_NS
return pnet->net;
681: 8b 46 04 mov 0x4(%rsi),%eax
684: be 02 00 00 00 mov $0x2,%esi
struct net *net = dev_net(vxlan->dev);
int err;
if (tb[NDA_DST]) {
689: 66 89 32 mov %si,(%rdx)
68c: 89 42 04 mov %eax,0x4(%rdx)
68f: 48 8b 43 30 mov 0x30(%rbx),%rax
693: 48 85 c0 test %rax,%rax
696: 0f 84 b5 00 00 00 je 751 <vxlan_fdb_parse+0x131>
return IN_MULTICAST(ntohl(ipa->sin.sin_addr.s_addr));
}
static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
{
if (nla_len(nla) >= sizeof(struct in6_addr)) {
69c: 66 83 38 06 cmpw $0x6,(%rax)
6a0: 75 6d jne 70f <vxlan_fdb_parse+0xef>
6a2: 0f b7 40 04 movzwl 0x4(%rax),%eax
6a6: 66 89 01 mov %ax,(%rcx)
ip->sin6.sin6_addr = nla_get_in6_addr(nla);
ip->sa.sa_family = AF_INET6;
return 0;
} else if (nla_len(nla) >= sizeof(__be32)) {
6a9: 48 8b 43 38 mov 0x38(%rbx),%rax
6ad: 48 85 c0 test %rax,%rax
6b0: 0f 84 b3 00 00 00 je 769 <vxlan_fdb_parse+0x149>
ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
ip->sa.sa_family = AF_INET;
6b6: 66 83 38 08 cmpw $0x8,(%rax)
6ba: 75 53 jne 70f <vxlan_fdb_parse+0xef>
if (nla_len(nla) >= sizeof(struct in6_addr)) {
ip->sin6.sin6_addr = nla_get_in6_addr(nla);
ip->sa.sa_family = AF_INET6;
return 0;
} else if (nla_len(nla) >= sizeof(__be32)) {
ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
6bc: 8b 40 04 mov 0x4(%rax),%eax
ip->sa.sa_family = AF_INET6;
#endif
}
}
if (tb[NDA_PORT]) {
6bf: 0f c8 bswap %eax
6c1: 41 89 00 mov %eax,(%r8)
6c4: 48 8b 43 40 mov 0x40(%rbx),%rax
6c8: 48 85 c0 test %rax,%rax
6cb: 0f 84 f4 00 00 00 je 7c5 <vxlan_fdb_parse+0x1a5>
if (nla_len(tb[NDA_PORT]) != sizeof(__be16))
6d1: 66 83 38 08 cmpw $0x8,(%rax)
* nla_get_be16 - return payload of __be16 attribute
* @nla: __be16 netlink attribute
*/
static inline __be16 nla_get_be16(const struct nlattr *nla)
{
return *(__be16 *) nla_data(nla);
6d5: 75 38 jne 70f <vxlan_fdb_parse+0xef>
return -EINVAL;
*port = nla_get_be16(tb[NDA_PORT]);
6d7: 8b 70 04 mov 0x4(%rax),%esi
} else {
*port = vxlan->cfg.dst_port;
}
if (tb[NDA_VNI]) {
6da: 4c 89 f7 mov %r14,%rdi
6dd: 41 89 31 mov %esi,(%r9)
6e0: e8 00 00 00 00 callq 6e5 <vxlan_fdb_parse+0xc5>
6e5: 48 83 f8 01 cmp $0x1,%rax
if (nla_len(tb[NDA_VNI]) != sizeof(u32))
6e9: 19 c0 sbb %eax,%eax
6eb: 83 e0 9d and $0xffffff9d,%eax
return -EINVAL;
*vni = cpu_to_be32(nla_get_u32(tb[NDA_VNI]));
6ee: 48 8b 54 24 38 mov 0x38(%rsp),%rdx
6f3: 65 48 33 14 25 28 00 xor %gs:0x28,%rdx
6fa: 00 00
} else {
*vni = vxlan->default_dst.remote_vni;
}
if (tb[NDA_IFINDEX]) {
6fc: 0f 85 f1 00 00 00 jne 7f3 <vxlan_fdb_parse+0x1d3>
struct net_device *tdev;
if (nla_len(tb[NDA_IFINDEX]) != sizeof(u32))
702: 48 8d 65 e0 lea -0x20(%rbp),%rsp
706: 5b pop %rbx
* nla_get_u32 - return payload of u32 attribute
* @nla: u32 netlink attribute
*/
static inline u32 nla_get_u32(const struct nlattr *nla)
{
return *(u32 *) nla_data(nla);
707: 41 5c pop %r12
709: 41 5d pop %r13
return -EINVAL;
*ifindex = nla_get_u32(tb[NDA_IFINDEX]);
tdev = __dev_get_by_index(net, *ifindex);
70b: 41 5e pop %r14
if (tb[NDA_IFINDEX]) {
struct net_device *tdev;
if (nla_len(tb[NDA_IFINDEX]) != sizeof(u32))
return -EINVAL;
*ifindex = nla_get_u32(tb[NDA_IFINDEX]);
70d: 5d pop %rbp
70e: c3 retq
70f: b8 ea ff ff ff mov $0xffffffea,%eax
tdev = __dev_get_by_index(net, *ifindex);
714: eb d8 jmp 6ee <vxlan_fdb_parse+0xce>
return -EADDRNOTAVAIL;
} else {
*ifindex = 0;
}
return 0;
716: 66 41 83 7d 40 02 cmpw $0x2,0x40(%r13)
71c: 0f 84 b1 00 00 00 je 7d3 <vxlan_fdb_parse+0x1b3>
}
722: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 729 <vxlan_fdb_parse+0x109>
729: 48 8b 15 00 00 00 00 mov 0x0(%rip),%rdx # 730 <vxlan_fdb_parse+0x110>
730: 49 89 44 24 08 mov %rax,0x8(%r12)
735: b8 0a 00 00 00 mov $0xa,%eax
73a: 49 89 54 24 10 mov %rdx,0x10(%r12)
}
}
if (tb[NDA_PORT]) {
if (nla_len(tb[NDA_PORT]) != sizeof(__be16))
return -EINVAL;
73f: 66 41 89 04 24 mov %ax,(%r12)
744: 48 8b 43 30 mov 0x30(%rbx),%rax
err = vxlan_nla_get_addr(ip, tb[NDA_DST]);
if (err)
return err;
} else {
union vxlan_addr *remote = &vxlan->default_dst.remote_ip;
if (remote->sa.sa_family == AF_INET) {
748: 48 85 c0 test %rax,%rax
74b: 0f 85 4b ff ff ff jne 69c <vxlan_fdb_parse+0x7c>
751: 41 0f b7 85 3c 01 00 movzwl 0x13c(%r13),%eax
758: 00
ip->sin.sin_addr.s_addr = htonl(INADDR_ANY);
ip->sa.sa_family = AF_INET;
#if IS_ENABLED(CONFIG_IPV6)
} else {
ip->sin6.sin6_addr = in6addr_any;
759: 66 89 01 mov %ax,(%rcx)
75c: 48 8b 43 38 mov 0x38(%rbx),%rax
760: 48 85 c0 test %rax,%rax
763: 0f 85 4d ff ff ff jne 6b6 <vxlan_fdb_parse+0x96>
ip->sa.sa_family = AF_INET6;
769: 41 8b 45 60 mov 0x60(%r13),%eax
if (remote->sa.sa_family == AF_INET) {
ip->sin.sin_addr.s_addr = htonl(INADDR_ANY);
ip->sa.sa_family = AF_INET;
#if IS_ENABLED(CONFIG_IPV6)
} else {
ip->sin6.sin6_addr = in6addr_any;
76d: 41 89 00 mov %eax,(%r8)
ip->sa.sa_family = AF_INET6;
770: e9 4f ff ff ff jmpq 6c4 <vxlan_fdb_parse+0xa4>
#endif
}
}
if (tb[NDA_PORT]) {
775: 48 8d 7c 24 20 lea 0x20(%rsp),%rdi
77a: ba 10 00 00 00 mov $0x10,%edx
77f: 4c 89 4c 24 08 mov %r9,0x8(%rsp)
if (nla_len(tb[NDA_PORT]) != sizeof(__be16))
return -EINVAL;
*port = nla_get_be16(tb[NDA_PORT]);
} else {
*port = vxlan->cfg.dst_port;
784: 4c 89 44 24 10 mov %r8,0x10(%rsp)
789: 48 89 4c 24 18 mov %rcx,0x18(%rsp)
}
if (tb[NDA_VNI]) {
78e: e8 00 00 00 00 callq 793 <vxlan_fdb_parse+0x173>
793: 48 8b 44 24 20 mov 0x20(%rsp),%rax
798: 48 8b 54 24 28 mov 0x28(%rsp),%rdx
if (nla_len(tb[NDA_VNI]) != sizeof(u32))
return -EINVAL;
*vni = cpu_to_be32(nla_get_u32(tb[NDA_VNI]));
} else {
*vni = vxlan->default_dst.remote_vni;
79d: bf 0a 00 00 00 mov $0xa,%edi
7a2: 66 41 89 3c 24 mov %di,(%r12)
*/
static inline struct in6_addr nla_get_in6_addr(const struct nlattr *nla)
{
struct in6_addr tmp;
nla_memcpy(&tmp, nla, sizeof(tmp));
7a7: 48 8b 4c 24 18 mov 0x18(%rsp),%rcx
7ac: 4c 8b 44 24 10 mov 0x10(%rsp),%r8
7b1: 4c 8b 4c 24 08 mov 0x8(%rsp),%r9
7b6: 49 89 44 24 08 mov %rax,0x8(%r12)
7bb: 49 89 54 24 10 mov %rdx,0x10(%r12)
7c0: e9 ca fe ff ff jmpq 68f <vxlan_fdb_parse+0x6f>
return tmp;
7c5: 41 c7 01 00 00 00 00 movl $0x0,(%r9)
7cc: 31 c0 xor %eax,%eax
static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
{
if (nla_len(nla) >= sizeof(struct in6_addr)) {
ip->sin6.sin6_addr = nla_get_in6_addr(nla);
ip->sa.sa_family = AF_INET6;
7ce: e9 1b ff ff ff jmpq 6ee <vxlan_fdb_parse+0xce>
7d3: c7 42 04 00 00 00 00 movl $0x0,0x4(%rdx)
7da: ba 02 00 00 00 mov $0x2,%edx
7df: 66 41 89 14 24 mov %dx,(%r12)
7e4: e9 a6 fe ff ff jmpq 68f <vxlan_fdb_parse+0x6f>
}
static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
{
if (nla_len(nla) >= sizeof(struct in6_addr)) {
ip->sin6.sin6_addr = nla_get_in6_addr(nla);
7e9: b8 9f ff ff ff mov $0xffffff9f,%eax
7ee: e9 fb fe ff ff jmpq 6ee <vxlan_fdb_parse+0xce>
7f3: e8 00 00 00 00 callq 7f8 <vxlan_fdb_parse+0x1d8>
*ifindex = nla_get_u32(tb[NDA_IFINDEX]);
tdev = __dev_get_by_index(net, *ifindex);
if (!tdev)
return -EADDRNOTAVAIL;
} else {
*ifindex = 0;
7f8: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
7ff: 00
0000000000000800 <vxlan_get_drvinfo>:
}
return 0;
800: e8 00 00 00 00 callq 805 <vxlan_get_drvinfo+0x5>
if (err)
return err;
} else {
union vxlan_addr *remote = &vxlan->default_dst.remote_ip;
if (remote->sa.sa_family == AF_INET) {
ip->sin.sin_addr.s_addr = htonl(INADDR_ANY);
805: 55 push %rbp
806: 48 8d 7e 24 lea 0x24(%rsi),%rdi
ip->sa.sa_family = AF_INET;
80a: ba 20 00 00 00 mov $0x20,%edx
80f: 48 89 e5 mov %rsp,%rbp
812: 53 push %rbx
813: 48 89 f3 mov %rsi,%rbx
816: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
} else if (nla_len(nla) >= sizeof(__be32)) {
ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
ip->sa.sa_family = AF_INET;
return 0;
} else {
return -EAFNOSUPPORT;
81d: e8 00 00 00 00 callq 822 <vxlan_get_drvinfo+0x22>
822: 48 8d 7b 04 lea 0x4(%rbx),%rdi
} else {
*ifindex = 0;
}
return 0;
}
826: ba 20 00 00 00 mov $0x20,%edx
82b: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
return 0;
}
static void vxlan_get_drvinfo(struct net_device *netdev,
struct ethtool_drvinfo *drvinfo)
{
832: e8 00 00 00 00 callq 837 <vxlan_get_drvinfo+0x37>
strlcpy(drvinfo->version, VXLAN_VERSION, sizeof(drvinfo->version));
837: 5b pop %rbx
838: 5d pop %rbp
839: c3 retq
83a: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1)
0000000000000840 <neigh_release>:
return 0;
}
static void vxlan_get_drvinfo(struct net_device *netdev,
struct ethtool_drvinfo *drvinfo)
{
840: f0 ff 4f 30 lock decl 0x30(%rdi)
844: 74 01 je 847 <neigh_release+0x7>
strlcpy(drvinfo->version, VXLAN_VERSION, sizeof(drvinfo->version));
846: c3 retq
847: 55 push %rbp
848: 48 89 e5 mov %rsp,%rbp
84b: e8 00 00 00 00 callq 850 <neigh_release+0x10>
850: 5d pop %rbp
851: c3 retq
strlcpy(drvinfo->driver, "vxlan", sizeof(drvinfo->driver));
852: 0f 1f 40 00 nopl 0x0(%rax)
856: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
85d: 00 00 00
0000000000000860 <vxlan_gro_complete>:
860: e8 00 00 00 00 callq 865 <vxlan_gro_complete+0x5>
865: 55 push %rbp
866: 48 89 f7 mov %rsi,%rdi
}
869: 8d 72 08 lea 0x8(%rdx),%esi
86c: 48 89 e5 mov %rsp,%rbp
86f: e8 00 00 00 00 callq 874 <vxlan_gro_complete+0x14>
* returns true if the result is 0, or false for all other
* cases.
*/
static __always_inline bool atomic_dec_and_test(atomic_t *v)
{
GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
874: 5d pop %rbp
875: c3 retq
876: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
87d: 00 00 00
0000000000000880 <vxlan_init>:
static inline void neigh_release(struct neighbour *neigh)
{
if (atomic_dec_and_test(&neigh->refcnt))
neigh_destroy(neigh);
}
880: e8 00 00 00 00 callq 885 <vxlan_init+0x5>
885: 55 push %rbp
886: ba c0 00 40 02 mov $0x24000c0,%edx
88b: be 08 00 00 00 mov $0x8,%esi
return pp;
}
static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
{
890: 48 89 e5 mov %rsp,%rbp
893: 41 54 push %r12
895: 53 push %rbx
896: 49 89 fc mov %rdi,%r12
/* Sets 'skb->inner_mac_header' since we are always called with
* 'skb->encapsulation' set.
*/
return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr));
899: bf 20 00 00 00 mov $0x20,%edi
return pp;
}
static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
{
89e: e8 00 00 00 00 callq 8a3 <vxlan_init+0x23>
/* Sets 'skb->inner_mac_header' since we are always called with
* 'skb->encapsulation' set.
*/
return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr));
8a3: 48 85 c0 test %rax,%rax
}
8a6: 74 38 je 8e0 <vxlan_init+0x60>
8a8: 48 89 c3 mov %rax,%rbx
8ab: ba ff ff ff ff mov $0xffffffff,%edx
spin_unlock(&vn->sock_lock);
}
/* Setup stats when device is created */
static int vxlan_init(struct net_device *dev)
{
8b0: 83 c2 01 add $0x1,%edx
8b3: be 00 01 00 00 mov $0x100,%esi
dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
8b8: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
8bf: 48 63 d2 movslq %edx,%rdx
spin_unlock(&vn->sock_lock);
}
/* Setup stats when device is created */
static int vxlan_init(struct net_device *dev)
{
8c2: e8 00 00 00 00 callq 8c7 <vxlan_init+0x47>
8c7: 3b 05 00 00 00 00 cmp 0x0(%rip),%eax # 8cd <vxlan_init+0x4d>
dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
8cd: 89 c2 mov %eax,%edx
8cf: 7c df jl 8b0 <vxlan_init+0x30>
8d1: 49 89 9c 24 88 04 00 mov %rbx,0x488(%r12)
8d8: 00
8d9: 31 c0 xor %eax,%eax
8db: 5b pop %rbx
8dc: 41 5c pop %r12
8de: 5d pop %rbp
8df: c3 retq
8e0: 5b pop %rbx
8e1: 49 c7 84 24 88 04 00 movq $0x0,0x488(%r12)
8e8: 00 00 00 00 00
8ed: b8 f4 ff ff ff mov $0xfffffff4,%eax
8f2: 41 5c pop %r12
8f4: 5d pop %rbp
8f5: c3 retq
8f6: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
8fd: 00 00 00
0000000000000900 <vxlan_dellink>:
900: e8 00 00 00 00 callq 905 <vxlan_dellink+0x5>
905: 55 push %rbp
906: 48 89 e5 mov %rsp,%rbp
if (!dev->tstats)
return -ENOMEM;
return 0;
909: 41 56 push %r14
}
90b: 41 55 push %r13
90d: 41 54 push %r12
90f: 53 push %rbx
910: 49 89 fd mov %rdi,%r13
}
/* Setup stats when device is created */
static int vxlan_init(struct net_device *dev)
{
dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
913: 48 8b 87 78 08 00 00 mov 0x878(%rdi),%rax
91a: 49 89 f6 mov %rsi,%r14
if (!dev->tstats)
return -ENOMEM;
91d: 48 8b 90 88 14 00 00 mov 0x1488(%rax),%rdx
return 0;
}
924: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # 92a <vxlan_dellink+0x2a>
92a: 83 e8 01 sub $0x1,%eax
92d: 48 98 cltq
92f: 48 8b 5c c2 18 mov 0x18(%rdx,%rax,8),%rbx
return vxlan_dev_configure(src_net, dev, &conf);
}
static void vxlan_dellink(struct net_device *dev, struct list_head *head)
{
934: 48 81 c3 10 08 00 00 add $0x810,%rbx
93b: 48 89 df mov %rbx,%rdi
93e: e8 00 00 00 00 callq 943 <vxlan_dellink+0x43>
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
943: 49 8b 85 48 08 00 00 mov 0x848(%r13),%rax
return vxlan_dev_configure(src_net, dev, &conf);
}
static void vxlan_dellink(struct net_device *dev, struct list_head *head)
{
94a: 48 85 c0 test %rax,%rax
})
static __always_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
__READ_ONCE_SIZE;
94d: 74 24 je 973 <vxlan_dellink+0x73>
94f: 49 8b 95 40 08 00 00 mov 0x840(%r13),%rdx
956: 48 85 d2 test %rdx,%rdx
959: 48 89 10 mov %rdx,(%rax)
95c: 74 04 je 962 <vxlan_dellink+0x62>
95e: 48 89 42 08 mov %rax,0x8(%rdx)
raw_spin_lock_init(&(_lock)->rlock); \
} while (0)
static __always_inline void spin_lock(spinlock_t *lock)
{
raw_spin_lock(&lock->rlock);
962: 48 b8 00 02 00 00 00 movabs $0xdead000000000200,%rax
969: 00 ad de
96c: 49 89 85 48 08 00 00 mov %rax,0x848(%r13)
973: 48 89 df mov %rbx,%rdi
976: ff 14 25 00 00 00 00 callq *0x0
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
spin_lock(&vn->sock_lock);
if (!hlist_unhashed(&vxlan->hlist))
97d: 49 83 bd 30 09 00 00 cmpq $0x0,0x930(%r13)
984: 00
return !READ_ONCE(h->first);
}
static inline void __hlist_del(struct hlist_node *n)
{
struct hlist_node *next = n->next;
985: 41 bc ff ff ff ff mov $0xffffffff,%r12d
{
switch (size) {
case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
98b: 0f 84 8a 00 00 00 je a1b <vxlan_dellink+0x11b>
struct hlist_node **pprev = n->pprev;
WRITE_ONCE(*pprev, next);
if (next)
next->pprev = pprev;
991: 41 8d 54 24 01 lea 0x1(%r12),%edx
* hlist_for_each_entry().
*/
static inline void hlist_del_rcu(struct hlist_node *n)
{
__hlist_del(n);
n->pprev = LIST_POISON2;
996: be 00 01 00 00 mov $0x100,%esi
99b: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
9a2: 48 63 d2 movslq %edx,%rdx
PVOP_VCALL2(pv_lock_ops.queued_spin_lock_slowpath, lock, val);
}
static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
{
PVOP_VCALLEE1(pv_lock_ops.queued_spin_unlock, lock);
9a5: e8 00 00 00 00 callq 9aa <vxlan_dellink+0xaa>
9aa: 3b 05 00 00 00 00 cmp 0x0(%rip),%eax # 9b0 <vxlan_dellink+0xb0>
static inline void gro_cells_destroy(struct gro_cells *gcells)
{
int i;
if (!gcells->cells)
9b0: 41 89 c4 mov %eax,%r12d
9b3: 7d 4f jge a04 <vxlan_dellink+0x104>
9b5: 48 98 cltq
9b7: 49 8b 9d 30 09 00 00 mov 0x930(%r13),%rbx
9be: 48 03 1c c5 00 00 00 add 0x0(,%rax,8),%rbx
9c5: 00
9c6: 48 8d 7b 18 lea 0x18(%rbx),%rdi
9ca: e8 00 00 00 00 callq 9cf <vxlan_dellink+0xcf>
9cf: 48 8b 3b mov (%rbx),%rdi
9d2: 48 39 fb cmp %rdi,%rbx
9d5: 74 ba je 991 <vxlan_dellink+0x91>
9d7: 48 85 ff test %rdi,%rdi
return;
for_each_possible_cpu(i) {
9da: 74 b5 je 991 <vxlan_dellink+0x91>
9dc: 83 6b 10 01 subl $0x1,0x10(%rbx)
9e0: 48 8b 17 mov (%rdi),%rdx
9e3: 48 8b 47 08 mov 0x8(%rdi),%rax
struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
9e7: 48 c7 07 00 00 00 00 movq $0x0,(%rdi)
9ee: 48 c7 47 08 00 00 00 movq $0x0,0x8(%rdi)
9f5: 00
netif_napi_del(&cell->napi);
9f6: 48 89 42 08 mov %rax,0x8(%rdx)
9fa: 48 89 10 mov %rdx,(%rax)
9fd: e8 00 00 00 00 callq a02 <vxlan_dellink+0x102>
*/
static inline struct sk_buff *skb_peek(const struct sk_buff_head *list_)
{
struct sk_buff *skb = list_->next;
if (skb == (struct sk_buff *)list_)
a02: eb cb jmp 9cf <vxlan_dellink+0xcf>
a04: 49 8b bd 30 09 00 00 mov 0x930(%r13),%rdi
*/
struct sk_buff *skb_dequeue(struct sk_buff_head *list);
static inline struct sk_buff *__skb_dequeue(struct sk_buff_head *list)
{
struct sk_buff *skb = skb_peek(list);
if (skb)
a0b: e8 00 00 00 00 callq a10 <vxlan_dellink+0x110>
static inline void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)
{
struct sk_buff *next, *prev;
list->qlen--;
next = skb->next;
a10: 49 c7 85 30 09 00 00 movq $0x0,0x930(%r13)
a17: 00 00 00 00
prev = skb->prev;
skb->next = skb->prev = NULL;
a1b: 49 8b 85 58 08 00 00 mov 0x858(%r13),%rax
a22: 49 8b 95 50 08 00 00 mov 0x850(%r13),%rdx
next->prev = prev;
a29: 4c 89 f6 mov %r14,%rsi
prev->next = next;
a2c: 4c 89 ef mov %r13,%rdi
void skb_queue_purge(struct sk_buff_head *list);
static inline void __skb_queue_purge(struct sk_buff_head *list)
{
struct sk_buff *skb;
while ((skb = __skb_dequeue(list)) != NULL)
kfree_skb(skb);
a2f: 48 89 42 08 mov %rax,0x8(%rdx)
a33: 48 89 10 mov %rdx,(%rax)
__skb_queue_purge(&cell->napi_skbs);
}
free_percpu(gcells->cells);
a36: 48 b8 00 01 00 00 00 movabs $0xdead000000000100,%rax
a3d: 00 ad de
gcells->cells = NULL;
a40: 49 89 85 50 08 00 00 mov %rax,0x850(%r13)
a47: 48 b8 00 02 00 00 00 movabs $0xdead000000000200,%rax
a4e: 00 ad de
__list_del(entry->prev, entry->next);
}
static inline void list_del(struct list_head *entry)
{
__list_del(entry->prev, entry->next);
a51: 49 89 85 58 08 00 00 mov %rax,0x858(%r13)
a58: e8 00 00 00 00 callq a5d <vxlan_dellink+0x15d>
hlist_del_rcu(&vxlan->hlist);
spin_unlock(&vn->sock_lock);
gro_cells_destroy(&vxlan->gro_cells);
list_del(&vxlan->next);
unregister_netdevice_queue(dev, head);
a5d: 5b pop %rbx
a5e: 41 5c pop %r12
* This is only for internal list manipulation where we know
* the prev/next entries already!
*/
static inline void __list_del(struct list_head * prev, struct list_head * next)
{
next->prev = prev;
a60: 41 5d pop %r13
a62: 41 5e pop %r14
a64: 5d pop %rbp
a65: c3 retq
}
static inline void list_del(struct list_head *entry)
{
__list_del(entry->prev, entry->next);
entry->next = LIST_POISON1;
a66: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
a6d: 00 00 00
0000000000000a70 <vxlan6_get_route>:
a70: e8 00 00 00 00 callq a75 <vxlan6_get_route+0x5>
a75: 55 push %rbp
a76: 48 89 e5 mov %rsp,%rbp
entry->prev = LIST_POISON2;
a79: 41 57 push %r15
a7b: 41 56 push %r14
a7d: 41 55 push %r13
a7f: 41 54 push %r12
a81: 49 89 f7 mov %rsi,%r15
a84: 53 push %rbx
a85: 89 cb mov %ecx,%ebx
a87: 49 89 fe mov %rdi,%r14
a8a: 48 83 ec 68 sub $0x68,%rsp
}
a8e: 8b b6 b4 00 00 00 mov 0xb4(%rsi),%esi
a94: 4c 8b 65 10 mov 0x10(%rbp),%r12
a98: 65 48 8b 0c 25 28 00 mov %gs:0x28,%rcx
a9f: 00 00
__be32 label,
const struct in6_addr *daddr,
struct in6_addr *saddr,
struct dst_cache *dst_cache,
const struct ip_tunnel_info *info)
{
aa1: 48 89 4c 24 60 mov %rcx,0x60(%rsp)
aa6: 31 c9 xor %ecx,%ecx
aa8: 4c 8b 6d 18 mov 0x18(%rbp),%r13
aac: 48 8b 45 20 mov 0x20(%rbp),%rax
ab0: 85 f6 test %esi,%esi
ab2: 75 51 jne b05 <vxlan6_get_route+0x95>
ab4: 48 85 c0 test %rax,%rax
ab7: 74 48 je b01 <vxlan6_get_route+0x91>
ab9: f6 40 28 20 testb $0x20,0x28(%rax)
abd: 75 46 jne b05 <vxlan6_get_route+0x95>
abf: 4c 89 e6 mov %r12,%rsi
ac2: 4c 89 ef mov %r13,%rdi
ac5: 4c 89 0c 24 mov %r9,(%rsp)
ac9: 44 89 44 24 08 mov %r8d,0x8(%rsp)
ace: 89 54 24 0c mov %edx,0xc(%rsp)
ad2: e8 00 00 00 00 callq ad7 <vxlan6_get_route+0x67>
ad7: 48 85 c0 test %rax,%rax
ada: 48 89 44 24 10 mov %rax,0x10(%rsp)
adf: 8b 54 24 0c mov 0xc(%rsp),%edx
static inline bool
ip_tunnel_dst_cache_usable(const struct sk_buff *skb,
const struct ip_tunnel_info *info)
{
if (skb->mark)
ae3: 44 8b 44 24 08 mov 0x8(%rsp),%r8d
return false;
if (!info)
ae8: 4c 8b 0c 24 mov (%rsp),%r9
return true;
if (info->key.tun_flags & TUNNEL_NOCACHE)
aec: 0f 85 ac 00 00 00 jne b9e <vxlan6_get_route+0x12e>
int err;
if (tos && !info)
use_cache = false;
if (use_cache) {
ndst = dst_cache_get_ip6(dst_cache, saddr);
af2: 41 8b b7 b4 00 00 00 mov 0xb4(%r15),%esi
af9: 41 bf 01 00 00 00 mov $0x1,%r15d
aff: eb 07 jmp b08 <vxlan6_get_route+0x98>
b01: 84 db test %bl,%bl
b03: 74 ba je abf <vxlan6_get_route+0x4f>
b05: 45 31 ff xor %r15d,%r15d
if (ndst)
b08: 4c 8d 54 24 18 lea 0x18(%rsp),%r10
int err;
if (tos && !info)
use_cache = false;
if (use_cache) {
ndst = dst_cache_get_ip6(dst_cache, saddr);
b0d: 31 c0 xor %eax,%eax
if (ndst)
b0f: b9 09 00 00 00 mov $0x9,%ecx
b14: 83 e3 1e and $0x1e,%ebx
b17: 4c 89 d7 mov %r10,%rdi
b1a: c1 e3 14 shl $0x14,%ebx
b1d: f3 48 ab rep stos %rax,%es:(%rdi)
b20: 49 8b 01 mov (%r9),%rax
b23: 89 54 24 18 mov %edx,0x18(%rsp)
b27: 0f cb bswap %ebx
b29: 49 8b 51 08 mov 0x8(%r9),%rdx
b2d: 41 09 d8 or %ebx,%r8d
b30: 89 74 24 20 mov %esi,0x20(%rsp)
bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
struct dst_entry *ndst;
struct flowi6 fl6;
int err;
if (tos && !info)
b34: 44 89 44 24 58 mov %r8d,0x58(%rsp)
ndst = dst_cache_get_ip6(dst_cache, saddr);
if (ndst)
return ndst;
}
memset(&fl6, 0, sizeof(fl6));
b39: c6 44 24 26 11 movb $0x11,0x26(%rsp)
b3e: 4c 89 d1 mov %r10,%rcx
b41: 48 89 44 24 38 mov %rax,0x38(%rsp)
return ntohl(flowinfo & IPV6_TCLASS_MASK) >> IPV6_TCLASS_SHIFT;
}
static inline __be32 ip6_make_flowinfo(unsigned int tclass, __be32 flowlabel)
{
return htonl(tclass << IPV6_TCLASS_SHIFT) | flowlabel;
b46: 49 8b 04 24 mov (%r12),%rax
b4a: 48 89 54 24 40 mov %rdx,0x40(%rsp)
b4f: 49 8b 54 24 08 mov 0x8(%r12),%rdx
fl6.flowi6_oif = oif;
b54: 49 8b 7e 38 mov 0x38(%r14),%rdi
b58: 48 89 44 24 48 mov %rax,0x48(%rsp)
fl6.daddr = *daddr;
fl6.saddr = *saddr;
fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
b5d: 49 8b 46 28 mov 0x28(%r14),%rax
fl6.flowi6_mark = skb->mark;
b61: 48 89 54 24 50 mov %rdx,0x50(%rsp)
memset(&fl6, 0, sizeof(fl6));
fl6.flowi6_oif = oif;
fl6.daddr = *daddr;
fl6.saddr = *saddr;
fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
b66: 48 8d 54 24 10 lea 0x10(%rsp),%rdx
fl6.flowi6_mark = skb->mark;
fl6.flowi6_proto = IPPROTO_UDP;
b6b: 48 8b 40 10 mov 0x10(%rax),%rax
err = ipv6_stub->ipv6_dst_lookup(vxlan->net,
b6f: 48 8b 70 20 mov 0x20(%rax),%rsi
return ndst;
}
memset(&fl6, 0, sizeof(fl6));
fl6.flowi6_oif = oif;
fl6.daddr = *daddr;
b73: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # b7a <vxlan6_get_route+0x10a>
b7a: ff 50 10 callq *0x10(%rax)
b7d: 85 c0 test %eax,%eax
fl6.saddr = *saddr;
b7f: 78 3c js bbd <vxlan6_get_route+0x14d>
b81: 48 8b 44 24 48 mov 0x48(%rsp),%rax
fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
fl6.flowi6_mark = skb->mark;
fl6.flowi6_proto = IPPROTO_UDP;
err = ipv6_stub->ipv6_dst_lookup(vxlan->net,
b86: 48 8b 54 24 50 mov 0x50(%rsp),%rdx
}
memset(&fl6, 0, sizeof(fl6));
fl6.flowi6_oif = oif;
fl6.daddr = *daddr;
fl6.saddr = *saddr;
b8b: 45 84 ff test %r15b,%r15b
fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
fl6.flowi6_mark = skb->mark;
fl6.flowi6_proto = IPPROTO_UDP;
err = ipv6_stub->ipv6_dst_lookup(vxlan->net,
vxlan->vn6_sock->sock->sk,
b8e: 49 89 04 24 mov %rax,(%r12)
}
memset(&fl6, 0, sizeof(fl6));
fl6.flowi6_oif = oif;
fl6.daddr = *daddr;
fl6.saddr = *saddr;
b92: 49 89 54 24 08 mov %rdx,0x8(%r12)
fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
fl6.flowi6_mark = skb->mark;
fl6.flowi6_proto = IPPROTO_UDP;
err = ipv6_stub->ipv6_dst_lookup(vxlan->net,
b97: 75 28 jne bc1 <vxlan6_get_route+0x151>
b99: 48 8b 44 24 10 mov 0x10(%rsp),%rax
b9e: 48 8b 4c 24 60 mov 0x60(%rsp),%rcx
ba3: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
baa: 00 00
bac: 75 25 jne bd3 <vxlan6_get_route+0x163>
vxlan->vn6_sock->sock->sk,
&ndst, &fl6);
if (err < 0)
bae: 48 83 c4 68 add $0x68,%rsp
return ERR_PTR(err);
*saddr = fl6.saddr;
bb2: 5b pop %rbx
bb3: 41 5c pop %r12
bb5: 41 5d pop %r13
bb7: 41 5e pop %r14
bb9: 41 5f pop %r15
if (use_cache)
bbb: 5d pop %rbp
bbc: c3 retq
bbd: 48 98 cltq
vxlan->vn6_sock->sock->sk,
&ndst, &fl6);
if (err < 0)
return ERR_PTR(err);
*saddr = fl6.saddr;
bbf: eb dd jmp b9e <vxlan6_get_route+0x12e>
bc1: 48 8b 74 24 10 mov 0x10(%rsp),%rsi
bc6: 4c 89 e2 mov %r12,%rdx
if (use_cache)
dst_cache_set_ip6(dst_cache, ndst, saddr);
return ndst;
bc9: 4c 89 ef mov %r13,%rdi
bcc: e8 00 00 00 00 callq bd1 <vxlan6_get_route+0x161>
}
bd1: eb c6 jmp b99 <vxlan6_get_route+0x129>
bd3: e8 00 00 00 00 callq bd8 <vxlan6_get_route+0x168>
bd8: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
bdf: 00
0000000000000be0 <__vxlan_sock_release_prep>:
be0: e8 00 00 00 00 callq be5 <__vxlan_sock_release_prep+0x5>
be5: 48 85 ff test %rdi,%rdi
be8: 0f 84 92 00 00 00 je c80 <__vxlan_sock_release_prep+0xa0>
#define IS_ERR_VALUE(x) unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)
static inline void * __must_check ERR_PTR(long error)
{
return (void *) error;
bee: f0 ff 8f 18 20 00 00 lock decl 0x2018(%rdi)
if (err < 0)
return ERR_PTR(err);
*saddr = fl6.saddr;
if (use_cache)
dst_cache_set_ip6(dst_cache, ndst, saddr);
bf5: 74 03 je bfa <__vxlan_sock_release_prep+0x1a>
bf7: 31 c0 xor %eax,%eax
bf9: c3 retq
bfa: 55 push %rbp
bfb: 48 89 e5 mov %rsp,%rbp
bfe: 41 54 push %r12
c00: 53 push %rbx
c01: 48 8b 47 10 mov 0x10(%rdi),%rax
return ndst;
}
c05: 48 89 fb mov %rdi,%rbx
c08: 48 8b 40 20 mov 0x20(%rax),%rax
c0c: 48 8b 40 30 mov 0x30(%rax),%rax
return false;
}
static bool __vxlan_sock_release_prep(struct vxlan_sock *vs)
{
c10: 48 8b 90 88 14 00 00 mov 0x1488(%rax),%rdx
struct vxlan_net *vn;
if (!vs)
c17: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # c1d <__vxlan_sock_release_prep+0x3d>
c1d: 83 e8 01 sub $0x1,%eax
c20: 48 98 cltq
c22: 4c 8b 64 c2 18 mov 0x18(%rdx,%rax,8),%r12
return false;
if (!atomic_dec_and_test(&vs->refcnt))
return false;
c27: 49 81 c4 10 08 00 00 add $0x810,%r12
return false;
}
static bool __vxlan_sock_release_prep(struct vxlan_sock *vs)
{
c2e: 4c 89 e7 mov %r12,%rdi
if (!vs)
return false;
if (!atomic_dec_and_test(&vs->refcnt))
return false;
vn = net_generic(sock_net(vs->sock->sk), vxlan_net_id);
c31: e8 00 00 00 00 callq c36 <__vxlan_sock_release_prep+0x56>
c36: 48 8b 03 mov (%rbx),%rax
c39: 48 8b 53 08 mov 0x8(%rbx),%rdx
c3d: 48 85 c0 test %rax,%rax
})
static __always_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
__READ_ONCE_SIZE;
c40: 48 89 02 mov %rax,(%rdx)
c43: 74 04 je c49 <__vxlan_sock_release_prep+0x69>
c45: 48 89 50 08 mov %rdx,0x8(%rax)
c49: 8b b3 1c 20 00 00 mov 0x201c(%rbx),%esi
c4f: 48 8b 7b 10 mov 0x10(%rbx),%rdi
c53: 48 b8 00 02 00 00 00 movabs $0xdead000000000200,%rax
c5a: 00 ad de
c5d: 48 89 43 08 mov %rax,0x8(%rbx)
c61: c1 ee 0d shr $0xd,%esi
c64: 83 e6 02 and $0x2,%esi
return !READ_ONCE(h->first);
}
static inline void __hlist_del(struct hlist_node *n)
{
struct hlist_node *next = n->next;
c67: e8 00 00 00 00 callq c6c <__vxlan_sock_release_prep+0x8c>
struct hlist_node **pprev = n->pprev;
c6c: 4c 89 e7 mov %r12,%rdi
WRITE_ONCE(*pprev, next);
if (next)
c6f: ff 14 25 00 00 00 00 callq *0x0
next->pprev = pprev;
c76: b8 01 00 00 00 mov $0x1,%eax
spin_lock(&vn->sock_lock);
hlist_del_rcu(&vs->hlist);
udp_tunnel_notify_del_rx_port(vs->sock,
c7b: 5b pop %rbx
c7c: 41 5c pop %r12
c7e: 5d pop %rbp
c7f: c3 retq
c80: 31 c0 xor %eax,%eax
c82: c3 retq
c83: 0f 1f 00 nopl (%rax)
c86: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
c8d: 00 00 00
0000000000000c90 <vxlan_sock_release.isra.44>:
c90: e8 00 00 00 00 callq c95 <vxlan_sock_release.isra.44+0x5>
c95: 55 push %rbp
c96: 48 89 e5 mov %rsp,%rbp
c99: 41 56 push %r14
c9b: 41 55 push %r13
c9d: 41 54 push %r12
c9f: 49 89 fc mov %rdi,%r12
ca2: 53 push %rbx
ca3: 48 8b 3f mov (%rdi),%rdi
static inline void __raw_spin_unlock(raw_spinlock_t *lock)
{
spin_release(&lock->dep_map, 1, _RET_IP_);
do_raw_spin_unlock(lock);
preempt_enable();
ca6: 48 89 f3 mov %rsi,%rbx
ca9: e8 32 ff ff ff callq be0 <__vxlan_sock_release_prep>
UDP_TUNNEL_TYPE_VXLAN_GPE :
UDP_TUNNEL_TYPE_VXLAN);
spin_unlock(&vn->sock_lock);
return true;
}
cae: 48 8b 3b mov (%rbx),%rdi
static bool __vxlan_sock_release_prep(struct vxlan_sock *vs)
{
struct vxlan_net *vn;
if (!vs)
return false;
cb1: 41 89 c6 mov %eax,%r14d
cb4: e8 27 ff ff ff callq be0 <__vxlan_sock_release_prep>
cb9: 41 89 c5 mov %eax,%r13d
cbc: e8 00 00 00 00 callq cc1 <vxlan_sock_release.isra.44+0x31>
spin_unlock(&vn->sock_lock);
return true;
}
static void vxlan_sock_release(struct vxlan_dev *vxlan)
cc1: 45 84 f6 test %r14b,%r14b
cc4: 75 22 jne ce8 <vxlan_sock_release.isra.44+0x58>
cc6: 45 84 ed test %r13b,%r13b
cc9: 74 14 je cdf <vxlan_sock_release.isra.44+0x4f>
ccb: 48 8b 03 mov (%rbx),%rax
cce: 48 8b 78 10 mov 0x10(%rax),%rdi
cd2: e8 00 00 00 00 callq cd7 <vxlan_sock_release.isra.44+0x47>
cd7: 48 8b 3b mov (%rbx),%rdi
{
bool ipv4 = __vxlan_sock_release_prep(vxlan->vn4_sock);
cda: e8 00 00 00 00 callq cdf <vxlan_sock_release.isra.44+0x4f>
#if IS_ENABLED(CONFIG_IPV6)
bool ipv6 = __vxlan_sock_release_prep(vxlan->vn6_sock);
cdf: 5b pop %rbx
ce0: 41 5c pop %r12
return true;
}
static void vxlan_sock_release(struct vxlan_dev *vxlan)
{
bool ipv4 = __vxlan_sock_release_prep(vxlan->vn4_sock);
ce2: 41 5d pop %r13
#if IS_ENABLED(CONFIG_IPV6)
bool ipv6 = __vxlan_sock_release_prep(vxlan->vn6_sock);
ce4: 41 5e pop %r14
ce6: 5d pop %rbp
ce7: c3 retq
ce8: 49 8b 04 24 mov (%r12),%rax
#endif
synchronize_net();
cec: 48 8b 78 10 mov 0x10(%rax),%rdi
cf0: e8 00 00 00 00 callq cf5 <vxlan_sock_release.isra.44+0x65>
if (ipv4) {
cf5: 49 8b 3c 24 mov (%r12),%rdi
udp_tunnel_sock_release(vxlan->vn4_sock->sock);
kfree(vxlan->vn4_sock);
}
#if IS_ENABLED(CONFIG_IPV6)
if (ipv6) {
cf9: e8 00 00 00 00 callq cfe <vxlan_sock_release.isra.44+0x6e>
udp_tunnel_sock_release(vxlan->vn6_sock->sock);
cfe: eb c6 jmp cc6 <vxlan_sock_release.isra.44+0x36>
0000000000000d00 <vxlan_netdevice_event>:
d00: e8 00 00 00 00 callq d05 <vxlan_netdevice_event+0x5>
d05: 55 push %rbp
d06: 48 89 e5 mov %rsp,%rbp
kfree(vxlan->vn6_sock);
d09: 41 56 push %r14
d0b: 41 55 push %r13
d0d: 41 54 push %r12
}
#endif
}
d0f: 53 push %rbx
d10: 48 83 ec 18 sub $0x18,%rsp
d14: 4c 8b 22 mov (%rdx),%r12
d17: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
d1e: 00 00
#endif
synchronize_net();
if (ipv4) {
udp_tunnel_sock_release(vxlan->vn4_sock->sock);
d20: 48 89 45 d8 mov %rax,-0x28(%rbp)
d24: 31 c0 xor %eax,%eax
kfree(vxlan->vn4_sock);
d26: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # d2c <vxlan_netdevice_event+0x2c>
d2c: 49 8b 94 24 80 04 00 mov 0x480(%r12),%rdx
d33: 00
unregister_netdevice_many(&list_kill);
}
static int vxlan_netdevice_event(struct notifier_block *unused,
unsigned long event, void *ptr)
{
d34: 83 e8 01 sub $0x1,%eax
d37: 48 83 fe 06 cmp $0x6,%rsi
d3b: 48 8b 8a 88 14 00 00 mov 0x1488(%rdx),%rcx
d42: 0f 84 8a 00 00 00 je dd2 <vxlan_netdevice_event+0xd2>
d48: 48 83 fe 1c cmp $0x1c,%rsi
d4c: 74 22 je d70 <vxlan_netdevice_event+0x70>
d4e: 31 c0 xor %eax,%eax
d50: 48 8b 4d d8 mov -0x28(%rbp),%rcx
d54: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
d5b: 00 00
d5d: 0f 85 d7 00 00 00 jne e3a <vxlan_netdevice_event+0x13a>
d63: 48 83 c4 18 add $0x18,%rsp
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
struct vxlan_net *vn = net_generic(dev_net(dev), vxlan_net_id);
if (event == NETDEV_UNREGISTER)
d67: 5b pop %rbx
d68: 41 5c pop %r12
d6a: 41 5d pop %r13
d6c: 41 5e pop %r14
d6e: 5d pop %rbp
d6f: c3 retq
d70: 48 8b 92 88 14 00 00 mov 0x1488(%rdx),%rdx
d77: 48 98 cltq
vxlan_handle_lowerdev_unregister(vn, dev);
else if (event == NETDEV_UDP_TUNNEL_PUSH_INFO)
d79: 4c 8b 6c c2 18 mov 0x18(%rdx,%rax,8),%r13
vxlan_push_rx_ports(dev);
return NOTIFY_DONE;
}
d7e: 4d 8d b5 10 08 00 00 lea 0x810(%r13),%r14
d85: 49 83 c5 10 add $0x10,%r13
d89: 4c 89 f7 mov %r14,%rdi
d8c: e8 00 00 00 00 callq d91 <vxlan_netdevice_event+0x91>
d91: 49 8b 5d 00 mov 0x0(%r13),%rbx
d95: 48 85 db test %rbx,%rbx
d98: 74 20 je dba <vxlan_netdevice_event+0xba>
d9a: 8b 93 1c 20 00 00 mov 0x201c(%rbx),%edx
da0: 48 8b 73 10 mov 0x10(%rbx),%rsi
da4: 4c 89 e7 mov %r12,%rdi
da7: c1 ea 0d shr $0xd,%edx
daa: 83 e2 02 and $0x2,%edx
dad: e8 00 00 00 00 callq db2 <vxlan_netdevice_event+0xb2>
db2: 48 8b 1b mov (%rbx),%rbx
db5: 48 85 db test %rbx,%rbx
db8: 75 e0 jne d9a <vxlan_netdevice_event+0x9a>
dba: 49 83 c5 08 add $0x8,%r13
dbe: 4d 39 ee cmp %r13,%r14
dc1: 75 ce jne d91 <vxlan_netdevice_event+0x91>
dc3: 4c 89 f7 mov %r14,%rdi
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
unsigned int i;
spin_lock(&vn->sock_lock);
for (i = 0; i < PORT_HASH_SIZE; ++i) {
hlist_for_each_entry_rcu(vs, &vn->sock_list[i], hlist)
dc6: ff 14 25 00 00 00 00 callq *0x0
udp_tunnel_push_rx_port(dev, vs->sock,
dcd: e9 7c ff ff ff jmpq d4e <vxlan_netdevice_event+0x4e>
dd2: 48 98 cltq
dd4: 4c 8d 75 c8 lea -0x38(%rbp),%r14
dd8: 4c 8b 6c c1 18 mov 0x18(%rcx,%rax,8),%r13
ddd: 4c 89 75 c8 mov %r14,-0x38(%rbp)
de1: 4c 89 75 d0 mov %r14,-0x30(%rbp)
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
unsigned int i;
spin_lock(&vn->sock_lock);
for (i = 0; i < PORT_HASH_SIZE; ++i) {
hlist_for_each_entry_rcu(vs, &vn->sock_list[i], hlist)
de5: 49 8b 45 00 mov 0x0(%r13),%rax
de9: 48 8b 08 mov (%rax),%rcx
dec: 49 39 c5 cmp %rax,%r13
struct net *net = dev_net(dev);
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
unsigned int i;
spin_lock(&vn->sock_lock);
for (i = 0; i < PORT_HASH_SIZE; ++i) {
def: 48 8d 50 f0 lea -0x10(%rax),%rdx
df3: 48 8d 59 f0 lea -0x10(%rcx),%rbx
df7: 75 19 jne e12 <vxlan_netdevice_event+0x112>
df9: eb 32 jmp e2d <vxlan_netdevice_event+0x12d>
dfb: 48 8b 43 10 mov 0x10(%rbx),%rax
dff: 48 8d 4b 10 lea 0x10(%rbx),%rcx
e03: 48 89 da mov %rbx,%rdx
static void vxlan_handle_lowerdev_unregister(struct vxlan_net *vn,
struct net_device *dev)
{
struct vxlan_dev *vxlan, *next;
LIST_HEAD(list_kill);
e06: 48 83 e8 10 sub $0x10,%rax
e0a: 49 39 cd cmp %rcx,%r13
e0d: 74 1e je e2d <vxlan_netdevice_event+0x12d>
e0f: 48 89 c3 mov %rax,%rbx
e12: 41 8b 84 24 28 01 00 mov 0x128(%r12),%eax
e19: 00
list_for_each_entry_safe(vxlan, next, &vn->vxlan_list, next) {
e1a: 39 42 64 cmp %eax,0x64(%rdx)
e1d: 75 dc jne dfb <vxlan_netdevice_event+0xfb>
e1f: 48 8b 7a 30 mov 0x30(%rdx),%rdi
e23: 4c 89 f6 mov %r14,%rsi
e26: e8 d5 fa ff ff callq 900 <vxlan_dellink>
e2b: eb ce jmp dfb <vxlan_netdevice_event+0xfb>
e2d: 4c 89 f7 mov %r14,%rdi
e30: e8 00 00 00 00 callq e35 <vxlan_netdevice_event+0x135>
e35: e9 14 ff ff ff jmpq d4e <vxlan_netdevice_event+0x4e>
e3a: e8 00 00 00 00 callq e3f <vxlan_netdevice_event+0x13f>
e3f: 90 nop
0000000000000e40 <__vxlan_sock_add>:
e40: e8 00 00 00 00 callq e45 <__vxlan_sock_add+0x5>
* and we loose the carrier due to module unload
* we also need to remove vxlan device. In other
* cases, it's not necessary and remote_ifindex
* is 0 here, so no matches.
*/
if (dst->remote_ifindex == dev->ifindex)
e45: 55 push %rbp
e46: 48 89 e5 mov %rsp,%rbp
e49: 41 57 push %r15
e4b: 41 56 push %r14
e4d: 41 55 push %r13
vxlan_dellink(vxlan->dev, &list_kill);
e4f: 41 54 push %r12
e51: 41 89 f5 mov %esi,%r13d
e54: 53 push %rbx
e55: 48 89 fb mov %rdi,%rbx
e58: 48 81 ec 88 00 00 00 sub $0x88,%rsp
}
unregister_netdevice_many(&list_kill);
e5f: 4c 8b 77 38 mov 0x38(%rdi),%r14
e63: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
e6a: 00 00
vxlan_handle_lowerdev_unregister(vn, dev);
else if (event == NETDEV_UDP_TUNNEL_PUSH_INFO)
vxlan_push_rx_ports(dev);
return NOTIFY_DONE;
}
e6c: 48 89 45 d0 mov %rax,-0x30(%rbp)
return vs;
}
static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)
{
e70: 31 c0 xor %eax,%eax
e72: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # e78 <__vxlan_sock_add+0x38>
e78: 49 8b 96 88 14 00 00 mov 0x1488(%r14),%rdx
e7f: 83 e8 01 sub $0x1,%eax
e82: 80 bf 5c 01 00 00 00 cmpb $0x0,0x15c(%rdi)
e89: 0f 84 e6 01 00 00 je 1075 <__vxlan_sock_add+0x235>
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
e8f: 49 8b 96 88 14 00 00 mov 0x1488(%r14),%rdx
return vs;
}
static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)
{
e96: 48 98 cltq
e98: 8b 8b 98 00 00 00 mov 0x98(%rbx),%ecx
e9e: be c0 80 40 02 mov $0x24080c0,%esi
ea3: bf 20 20 00 00 mov $0x2020,%edi
ea8: 44 0f b7 a3 3c 01 00 movzwl 0x13c(%rbx),%r12d
eaf: 00
eb0: 48 8b 44 c2 18 mov 0x18(%rdx,%rax,8),%rax
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
struct vxlan_sock *vs = NULL;
if (!vxlan->cfg.no_share) {
eb5: ba 02 00 00 00 mov $0x2,%edx
eba: 89 8d 68 ff ff ff mov %ecx,-0x98(%rbp)
ec0: 48 89 85 58 ff ff ff mov %rax,-0xa8(%rbp)
ec7: e8 00 00 00 00 callq ecc <__vxlan_sock_add+0x8c>
return -EBUSY;
}
spin_unlock(&vn->sock_lock);
}
if (!vs)
vs = vxlan_socket_create(vxlan->net, ipv6,
ecc: 48 85 c0 test %rax,%rax
#endif
static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
{
unsigned int order = get_order(size);
return kmalloc_order_trace(size, flags, order);
ecf: 49 89 c7 mov %rax,%r15
ed2: 0f 84 d1 02 00 00 je 11a9 <__vxlan_sock_add+0x369>
vxlan->cfg.dst_port, vxlan->flags);
ed8: 48 8d 40 18 lea 0x18(%rax),%rax
edc: 49 8d 97 18 20 00 00 lea 0x2018(%r15),%rdx
ee3: 48 c7 00 00 00 00 00 movq $0x0,(%rax)
return -EBUSY;
}
spin_unlock(&vn->sock_lock);
}
if (!vs)
vs = vxlan_socket_create(vxlan->net, ipv6,
eea: 48 83 c0 08 add $0x8,%rax
eee: 48 39 c2 cmp %rax,%rdx
ef1: 75 f0 jne ee3 <__vxlan_sock_add+0xa3>
ef3: 48 8d 75 a4 lea -0x5c(%rbp),%rsi
ef7: 48 8d 7d a8 lea -0x58(%rbp),%rdi
efb: 31 c0 xor %eax,%eax
struct socket *sock;
unsigned int h;
struct udp_tunnel_sock_cfg tunnel_cfg;
vs = kzalloc(sizeof(*vs), GFP_KERNEL);
if (!vs)
efd: 48 c7 45 a4 00 00 00 movq $0x0,-0x5c(%rbp)
f04: 00
f05: 48 c7 45 c8 00 00 00 movq $0x0,-0x38(%rbp)
f0c: 00
f0d: 48 89 f1 mov %rsi,%rcx
f10: 48 29 f9 sub %rdi,%rcx
return ERR_PTR(-ENOMEM);
for (h = 0; h < VNI_HASH_SIZE; ++h)
INIT_HLIST_HEAD(&vs->vni_list[h]);
f13: 83 c1 2c add $0x2c,%ecx
f16: c1 e9 03 shr $0x3,%ecx
f19: 45 84 ed test %r13b,%r13b
f1c: f3 48 ab rep stos %rax,%es:(%rdi)
vs = kzalloc(sizeof(*vs), GFP_KERNEL);
if (!vs)
return ERR_PTR(-ENOMEM);
for (h = 0; h < VNI_HASH_SIZE; ++h)
f1f: 0f 85 bc 02 00 00 jne 11e1 <__vxlan_sock_add+0x3a1>
{
struct socket *sock;
struct udp_port_cfg udp_conf;
int err;
memset(&udp_conf, 0, sizeof(udp_conf));
f25: 48 8d 85 70 ff ff ff lea -0x90(%rbp),%rax
f2c: 4c 89 f7 mov %r14,%rdi
f2f: c6 45 a4 02 movb $0x2,-0x5c(%rbp)
f33: 66 44 89 65 c8 mov %r12w,-0x38(%rbp)
f38: 48 89 c2 mov %rax,%rdx
f3b: 48 89 85 60 ff ff ff mov %rax,-0xa0(%rbp)
f42: e8 00 00 00 00 callq f47 <__vxlan_sock_add+0x107>
f47: 85 c0 test %eax,%eax
if (ipv6) {
f49: 0f 88 16 03 00 00 js 1265 <__vxlan_sock_add+0x425>
f4f: 4c 8b 9d 70 ff ff ff mov -0x90(%rbp),%r11
static inline int udp_sock_create(struct net *net,
struct udp_port_cfg *cfg,
struct socket **sockp)
{
if (cfg->family == AF_INET)
return udp_sock_create4(net, cfg, sockp);
f56: 49 81 fb 00 f0 ff ff cmp $0xfffffffffffff000,%r11
f5d: 0f 87 c6 02 00 00 ja 1229 <__vxlan_sock_add+0x3e9>
udp_conf.ipv6_v6only = 1;
} else {
udp_conf.family = AF_INET;
}
udp_conf.local_udp_port = port;
f63: 8b 85 68 ff ff ff mov -0x98(%rbp),%eax
f69: 4d 89 5f 10 mov %r11,0x10(%r15)
f6d: 66 41 c1 c4 08 rol $0x8,%r12w
f72: 41 c7 87 18 20 00 00 movl $0x1,0x2018(%r15)
f79: 01 00 00 00
/* Open UDP socket */
err = udp_sock_create(net, &udp_conf, &sock);
if (err < 0)
f7d: 4c 89 9d 50 ff ff ff mov %r11,-0xb0(%rbp)
return ERR_PTR(err);
return sock;
f84: 25 00 7d 00 00 and $0x7d00,%eax
for (h = 0; h < VNI_HASH_SIZE; ++h)
INIT_HLIST_HEAD(&vs->vni_list[h]);
sock = vxlan_create_sock(net, ipv6, port, flags);
if (IS_ERR(sock)) {
f89: 41 89 87 1c 20 00 00 mov %eax,0x201c(%r15)
f90: 48 8b 85 58 ff ff ff mov -0xa8(%rbp),%rax
return ERR_CAST(sock);
}
vs->sock = sock;
atomic_set(&vs->refcnt, 1);
vs->flags = (flags & VXLAN_F_RCV_FLAGS);
f97: 48 05 10 08 00 00 add $0x810,%rax
/* Socket hash table head */
static inline struct hlist_head *vs_head(struct net *net, __be16 port)
{
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
f9d: 48 89 c7 mov %rax,%rdi
fa0: 48 89 85 68 ff ff ff mov %rax,-0x98(%rbp)
static __always_inline void __write_once_size(volatile void *p, void *res, int size)
{
switch (size) {
case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
fa7: e8 00 00 00 00 callq fac <__vxlan_sock_add+0x16c>
fac: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # fb2 <__vxlan_sock_add+0x172>
PTR_ERR(sock));
kfree(vs);
return ERR_CAST(sock);
}
vs->sock = sock;
fb2: 49 8b 96 88 14 00 00 mov 0x1488(%r14),%rdx
atomic_set(&vs->refcnt, 1);
vs->flags = (flags & VXLAN_F_RCV_FLAGS);
fb9: 83 e8 01 sub $0x1,%eax
fbc: 48 98 cltq
fbe: 48 8b 54 c2 18 mov 0x18(%rdx,%rax,8),%rdx
fc3: 41 0f b7 c4 movzwl %r12w,%eax
fc7: 69 c0 47 86 c8 61 imul $0x61c88647,%eax,%eax
fcd: c1 e8 18 shr $0x18,%eax
fd0: 48 83 c0 02 add $0x2,%rax
fd4: 48 8d 0c c2 lea (%rdx,%rax,8),%rcx
fd8: 48 8b 04 c2 mov (%rdx,%rax,8),%rax
fdc: 49 89 4f 08 mov %rcx,0x8(%r15)
fe0: 49 89 07 mov %rax,(%r15)
})
static __always_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
__READ_ONCE_SIZE;
fe3: 48 85 c0 test %rax,%rax
fe6: 4c 89 39 mov %r15,(%rcx)
fe9: 4c 8b 9d 50 ff ff ff mov -0xb0(%rbp),%r11
ff0: 74 04 je ff6 <__vxlan_sock_add+0x1b6>
ff2: 4c 89 78 08 mov %r15,0x8(%rax)
#define hash_32 hash_32_generic
#endif
static inline u32 hash_32_generic(u32 val, unsigned int bits)
{
/* High bits are more random, so use them. */
return __hash_32(val) >> (32 - bits);
ff6: 41 8b b7 1c 20 00 00 mov 0x201c(%r15),%esi
ffd: 4c 89 df mov %r11,%rdi
/* Socket hash table head */
static inline struct hlist_head *vs_head(struct net *net, __be16 port)
{
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
1000: 4c 89 9d 58 ff ff ff mov %r11,-0xa8(%rbp)
1007: c1 ee 0d shr $0xd,%esi
* list-traversal primitive must be guarded by rcu_read_lock().
*/
static inline void hlist_add_head_rcu(struct hlist_node *n,
struct hlist_head *h)
{
struct hlist_node *first = h->first;
100a: 83 e6 02 and $0x2,%esi
n->next = first;
n->pprev = &h->first;
100d: e8 00 00 00 00 callq 1012 <__vxlan_sock_add+0x1d2>
static inline void hlist_add_head_rcu(struct hlist_node *n,
struct hlist_head *h)
{
struct hlist_node *first = h->first;
n->next = first;
1012: 48 8b bd 68 ff ff ff mov -0x98(%rbp),%rdi
n->pprev = &h->first;
rcu_assign_pointer(hlist_first_rcu(h), n);
if (first)
1019: ff 14 25 00 00 00 00 callq *0x0
1020: 48 8b bd 60 ff ff ff mov -0xa0(%rbp),%rdi
atomic_set(&vs->refcnt, 1);
vs->flags = (flags & VXLAN_F_RCV_FLAGS);
spin_lock(&vn->sock_lock);
hlist_add_head_rcu(&vs->hlist, vs_head(net, port));
udp_tunnel_notify_add_rx_port(sock,
1027: 31 c0 xor %eax,%eax
1029: b9 06 00 00 00 mov $0x6,%ecx
102e: 4c 8b 9d 58 ff ff ff mov -0xa8(%rbp),%r11
1035: 48 8b 95 60 ff ff ff mov -0xa0(%rbp),%rdx
103c: f3 48 ab rep stos %rax,%es:(%rdi)
103f: 4c 89 de mov %r11,%rsi
1042: 4c 89 f7 mov %r14,%rdi
1045: 4c 89 bd 70 ff ff ff mov %r15,-0x90(%rbp)
104c: c6 85 78 ff ff ff 01 movb $0x1,-0x88(%rbp)
UDP_TUNNEL_TYPE_VXLAN_GPE :
UDP_TUNNEL_TYPE_VXLAN);
spin_unlock(&vn->sock_lock);
/* Mark socket as an encapsulation socket. */
memset(&tunnel_cfg, 0, sizeof(tunnel_cfg));
1053: 48 c7 45 80 00 00 00 movq $0x0,-0x80(%rbp)
105a: 00
105b: 48 c7 45 90 00 00 00 movq $0x0,-0x70(%rbp)
1062: 00
tunnel_cfg.encap_rcv = vxlan_rcv;
tunnel_cfg.encap_destroy = NULL;
tunnel_cfg.gro_receive = vxlan_gro_receive;
tunnel_cfg.gro_complete = vxlan_gro_complete;
setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
1063: 48 c7 45 98 00 00 00 movq $0x0,-0x68(%rbp)
106a: 00
106b: e8 00 00 00 00 callq 1070 <__vxlan_sock_add+0x230>
1070: e9 80 00 00 00 jmpq 10f5 <__vxlan_sock_add+0x2b5>
UDP_TUNNEL_TYPE_VXLAN);
spin_unlock(&vn->sock_lock);
/* Mark socket as an encapsulation socket. */
memset(&tunnel_cfg, 0, sizeof(tunnel_cfg));
tunnel_cfg.sk_user_data = vs;
1075: 48 98 cltq
1077: 4c 8b 64 c2 18 mov 0x18(%rdx,%rax,8),%r12
tunnel_cfg.encap_type = 1;
107c: 49 81 c4 10 08 00 00 add $0x810,%r12
tunnel_cfg.encap_rcv = vxlan_rcv;
1083: 4c 89 e7 mov %r12,%rdi
1086: e8 00 00 00 00 callq 108b <__vxlan_sock_add+0x24b>
tunnel_cfg.encap_destroy = NULL;
tunnel_cfg.gro_receive = vxlan_gro_receive;
108b: 41 80 fd 01 cmp $0x1,%r13b
108f: 0f b7 93 3c 01 00 00 movzwl 0x13c(%rbx),%edx
tunnel_cfg.gro_complete = vxlan_gro_complete;
1096: 8b 8b 98 00 00 00 mov 0x98(%rbx),%ecx
setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
109c: 19 f6 sbb %esi,%esi
109e: 48 8b 7b 38 mov 0x38(%rbx),%rdi
10a2: 83 e6 f8 and $0xfffffff8,%esi
10a5: 83 c6 0a add $0xa,%esi
10a8: e8 43 f0 ff ff callq f0 <vxlan_find_sock>
10ad: 48 85 c0 test %rax,%rax
10b0: 49 89 c7 mov %rax,%r15
10b3: 74 2d je 10e2 <__vxlan_sock_add+0x2a2>
10b5: 8b 88 18 20 00 00 mov 0x2018(%rax),%ecx
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
struct vxlan_sock *vs = NULL;
if (!vxlan->cfg.no_share) {
spin_lock(&vn->sock_lock);
vs = vxlan_find_sock(vxlan->net, ipv6 ? AF_INET6 : AF_INET,
10bb: 85 c9 test %ecx,%ecx
10bd: 0f 84 0d 01 00 00 je 11d0 <__vxlan_sock_add+0x390>
10c3: 48 8d b0 18 20 00 00 lea 0x2018(%rax),%rsi
10ca: 8d 51 01 lea 0x1(%rcx),%edx
10cd: 89 c8 mov %ecx,%eax
10cf: f0 41 0f b1 97 18 20 lock cmpxchg %edx,0x2018(%r15)
10d6: 00 00
10d8: 39 c1 cmp %eax,%ecx
10da: 89 c2 mov %eax,%edx
10dc: 0f 85 d3 00 00 00 jne 11b5 <__vxlan_sock_add+0x375>
10e2: 4c 89 e7 mov %r12,%rdi
10e5: ff 14 25 00 00 00 00 callq *0x0
static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
int c, old;
c = atomic_read(v);
for (;;) {
if (unlikely(c == (u)))
10ec: 4d 85 ff test %r15,%r15
10ef: 0f 84 78 01 00 00 je 126d <__vxlan_sock_add+0x42d>
return xadd(&v->counter, -i);
}
static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{
return cmpxchg(&v->counter, old, new);
10f5: 4d 89 fb mov %r15,%r11
10f8: 49 81 fb 00 f0 ff ff cmp $0xfffffffffffff000,%r11
10ff: 0f 87 ab 00 00 00 ja 11b0 <__vxlan_sock_add+0x370>
1105: 45 84 ed test %r13b,%r13b
c = atomic_read(v);
for (;;) {
if (unlikely(c == (u)))
break;
old = atomic_cmpxchg((v), c, c + (a));
if (likely(old == c))
1108: 0f 84 92 00 00 00 je 11a0 <__vxlan_sock_add+0x360>
110e: 4c 89 7b 28 mov %r15,0x28(%rbx)
1112: 48 8b 43 38 mov 0x38(%rbx),%rax
1116: 44 8b 63 60 mov 0x60(%rbx),%r12d
111a: 48 8b 90 88 14 00 00 mov 0x1488(%rax),%rdx
spin_unlock(&vn->sock_lock);
return -EBUSY;
}
spin_unlock(&vn->sock_lock);
}
if (!vs)
1121: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # 1127 <__vxlan_sock_add+0x2e7>
1127: 83 e8 01 sub $0x1,%eax
vs = vxlan_socket_create(vxlan->net, ipv6,
vxlan->cfg.dst_port, vxlan->flags);
if (IS_ERR(vs))
112a: 48 98 cltq
112c: 4c 8b 6c c2 18 mov 0x18(%rdx,%rax,8),%r13
1131: 49 81 c5 10 08 00 00 add $0x810,%r13
return PTR_ERR(vs);
#if IS_ENABLED(CONFIG_IPV6)
if (ipv6)
1138: 4c 89 ef mov %r13,%rdi
113b: e8 00 00 00 00 callq 1140 <__vxlan_sock_add+0x300>
vxlan->vn6_sock = vs;
1140: 41 69 d4 47 86 c8 61 imul $0x61c88647,%r12d,%edx
}
static void vxlan_vs_add_dev(struct vxlan_sock *vs, struct vxlan_dev *vxlan)
{
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
__be32 vni = vxlan->default_dst.remote_vni;
1147: c1 ea 16 shr $0x16,%edx
114a: 48 83 c2 02 add $0x2,%rdx
114e: 49 8d 0c d7 lea (%r15,%rdx,8),%rcx
1152: 49 8b 44 d7 08 mov 0x8(%r15,%rdx,8),%rax
1157: 48 8d 71 08 lea 0x8(%rcx),%rsi
115b: 48 89 03 mov %rax,(%rbx)
115e: 48 89 73 08 mov %rsi,0x8(%rbx)
1162: 48 85 c0 test %rax,%rax
1165: 48 89 59 08 mov %rbx,0x8(%rcx)
1169: 74 04 je 116f <__vxlan_sock_add+0x32f>
116b: 48 89 58 08 mov %rbx,0x8(%rax)
116f: 4c 89 ef mov %r13,%rdi
1172: ff 14 25 00 00 00 00 callq *0x0
1179: 31 c0 xor %eax,%eax
#endif
/* Virtual Network hash table head */
static inline struct hlist_head *vni_head(struct vxlan_sock *vs, __be32 vni)
{
return &vs->vni_list[hash_32((__force u32)vni, VNI_HASH_BITS)];
117b: 48 8b 75 d0 mov -0x30(%rbp),%rsi
117f: 65 48 33 34 25 28 00 xor %gs:0x28,%rsi
1186: 00 00
1188: 0f 85 d2 00 00 00 jne 1260 <__vxlan_sock_add+0x420>
struct hlist_head *h)
{
struct hlist_node *first = h->first;
n->next = first;
n->pprev = &h->first;
118e: 48 81 c4 88 00 00 00 add $0x88,%rsp
{
switch (size) {
case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
1195: 5b pop %rbx
1196: 41 5c pop %r12
1198: 41 5d pop %r13
rcu_assign_pointer(hlist_first_rcu(h), n);
if (first)
119a: 41 5e pop %r14
first->pprev = &n->next;
119c: 41 5f pop %r15
119e: 5d pop %rbp
119f: c3 retq
11a0: 4c 89 7b 20 mov %r15,0x20(%rbx)
11a4: e9 69 ff ff ff jmpq 1112 <__vxlan_sock_add+0x2d2>
vxlan->vn6_sock = vs;
else
#endif
vxlan->vn4_sock = vs;
vxlan_vs_add_dev(vs, vxlan);
return 0;
11a9: 49 c7 c7 f4 ff ff ff mov $0xfffffffffffffff4,%r15
}
11b0: 44 89 f8 mov %r15d,%eax
11b3: eb c6 jmp 117b <__vxlan_sock_add+0x33b>
11b5: 85 d2 test %edx,%edx
11b7: 74 17 je 11d0 <__vxlan_sock_add+0x390>
11b9: 8d 4a 01 lea 0x1(%rdx),%ecx
11bc: 89 d0 mov %edx,%eax
11be: f0 0f b1 0e lock cmpxchg %ecx,(%rsi)
11c2: 39 d0 cmp %edx,%eax
11c4: 0f 84 18 ff ff ff je 10e2 <__vxlan_sock_add+0x2a2>
11ca: 89 c2 mov %eax,%edx
11cc: 85 d2 test %edx,%edx
11ce: 75 e9 jne 11b9 <__vxlan_sock_add+0x379>
#if IS_ENABLED(CONFIG_IPV6)
if (ipv6)
vxlan->vn6_sock = vs;
else
#endif
vxlan->vn4_sock = vs;
11d0: 4c 89 e7 mov %r12,%rdi
11d3: ff 14 25 00 00 00 00 callq *0x0
unsigned int h;
struct udp_tunnel_sock_cfg tunnel_cfg;
vs = kzalloc(sizeof(*vs), GFP_KERNEL);
if (!vs)
return ERR_PTR(-ENOMEM);
11da: b8 f0 ff ff ff mov $0xfffffff0,%eax
11df: eb 9a jmp 117b <__vxlan_sock_add+0x33b>
}
if (!vs)
vs = vxlan_socket_create(vxlan->net, ipv6,
vxlan->cfg.dst_port, vxlan->flags);
if (IS_ERR(vs))
return PTR_ERR(vs);
11e1: 8b 85 68 ff ff ff mov -0x98(%rbp),%eax
static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
int c, old;
c = atomic_read(v);
for (;;) {
if (unlikely(c == (u)))
11e7: 0f b6 55 cc movzbl -0x34(%rbp),%edx
break;
old = atomic_cmpxchg((v), c, c + (a));
11eb: 4c 89 f7 mov %r14,%rdi
return xadd(&v->counter, -i);
}
static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{
return cmpxchg(&v->counter, old, new);
11ee: c6 45 a4 0a movb $0xa,-0x5c(%rbp)
c = atomic_read(v);
for (;;) {
if (unlikely(c == (u)))
break;
old = atomic_cmpxchg((v), c, c + (a));
if (likely(old == c))
11f2: 66 44 89 65 c8 mov %r12w,-0x38(%rbp)
11f7: c1 e8 08 shr $0x8,%eax
11fa: 83 f0 01 xor $0x1,%eax
static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
int c, old;
c = atomic_read(v);
for (;;) {
if (unlikely(c == (u)))
11fd: 83 e2 fb and $0xfffffffb,%edx
1200: 83 e0 01 and $0x1,%eax
1203: c1 e0 02 shl $0x2,%eax
1206: 09 d0 or %edx,%eax
1208: 83 c8 08 or $0x8,%eax
spin_lock(&vn->sock_lock);
vs = vxlan_find_sock(vxlan->net, ipv6 ? AF_INET6 : AF_INET,
vxlan->cfg.dst_port, vxlan->flags);
if (vs && !atomic_add_unless(&vs->refcnt, 1, 0)) {
spin_unlock(&vn->sock_lock);
return -EBUSY;
120b: 88 45 cc mov %al,-0x34(%rbp)
120e: 48 8d 85 70 ff ff ff lea -0x90(%rbp),%rax
memset(&udp_conf, 0, sizeof(udp_conf));
if (ipv6) {
udp_conf.family = AF_INET6;
udp_conf.use_udp6_rx_checksums =
1215: 48 89 c2 mov %rax,%rdx
1218: 48 89 85 60 ff ff ff mov %rax,-0xa0(%rbp)
int err;
memset(&udp_conf, 0, sizeof(udp_conf));
if (ipv6) {
udp_conf.family = AF_INET6;
121f: e8 00 00 00 00 callq 1224 <__vxlan_sock_add+0x3e4>
udp_conf.ipv6_v6only = 1;
} else {
udp_conf.family = AF_INET;
}
udp_conf.local_udp_port = port;
1224: e9 1e fd ff ff jmpq f47 <__vxlan_sock_add+0x107>
memset(&udp_conf, 0, sizeof(udp_conf));
if (ipv6) {
udp_conf.family = AF_INET6;
udp_conf.use_udp6_rx_checksums =
1229: 44 89 e6 mov %r12d,%esi
122c: 4c 89 da mov %r11,%rdx
122f: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
1236: 66 c1 c6 08 rol $0x8,%si
!(flags & VXLAN_F_UDP_ZERO_CSUM6_RX);
udp_conf.ipv6_v6only = 1;
123a: 4c 89 9d 68 ff ff ff mov %r11,-0x98(%rbp)
if (cfg->family == AF_INET6)
return udp_sock_create6(net, cfg, sockp);
1241: 0f b7 f6 movzwl %si,%esi
1244: e8 00 00 00 00 callq 1249 <__vxlan_sock_add+0x409>
1249: 4c 89 ff mov %r15,%rdi
124c: e8 00 00 00 00 callq 1251 <__vxlan_sock_add+0x411>
1251: 4c 8b 9d 68 ff ff ff mov -0x98(%rbp),%r11
1258: 4d 89 df mov %r11,%r15
for (h = 0; h < VNI_HASH_SIZE; ++h)
INIT_HLIST_HEAD(&vs->vni_list[h]);
sock = vxlan_create_sock(net, ipv6, port, flags);
if (IS_ERR(sock)) {
pr_info("Cannot bind port %d, err=%ld\n", ntohs(port),
125b: e9 98 fe ff ff jmpq 10f8 <__vxlan_sock_add+0x2b8>
1260: e8 00 00 00 00 callq 1265 <__vxlan_sock_add+0x425>
1265: 4c 63 d8 movslq %eax,%r11
1268: e9 e9 fc ff ff jmpq f56 <__vxlan_sock_add+0x116>
126d: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # 1273 <__vxlan_sock_add+0x433>
1273: 4c 8b 73 38 mov 0x38(%rbx),%r14
1277: 83 e8 01 sub $0x1,%eax
PTR_ERR(sock));
kfree(vs);
127a: e9 10 fc ff ff jmpq e8f <__vxlan_sock_add+0x4f>
127f: 90 nop
0000000000001280 <vxlan_open>:
1280: e8 00 00 00 00 callq 1285 <vxlan_open+0x5>
1285: 55 push %rbp
1286: 48 89 e5 mov %rsp,%rbp
1289: 41 57 push %r15
128b: 41 56 push %r14
128d: 41 55 push %r13
128f: 41 54 push %r12
else
#endif
vxlan->vn4_sock = vs;
vxlan_vs_add_dev(vs, vxlan);
return 0;
}
1291: 4c 8d bf 40 08 00 00 lea 0x840(%rdi),%r15
1298: 53 push %rbx
1299: 48 89 fb mov %rdi,%rbx
129c: 48 83 ec 18 sub $0x18,%rsp
12a0: 48 c7 87 60 08 00 00 movq $0x0,0x860(%rdi)
12a7: 00 00 00 00
12ab: 48 c7 87 68 08 00 00 movq $0x0,0x868(%rdi)
12b2: 00 00 00 00
free_percpu(dev->tstats);
}
/* Start ageing timer and join group when device is brought up */
static int vxlan_open(struct net_device *dev)
{
12b6: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
12bd: 00 00
12bf: 48 89 45 d0 mov %rax,-0x30(%rbp)
*
* Get network device private data
*/
static inline void *netdev_priv(const struct net_device *dev)
{
return (char *)dev + ALIGN(sizeof(struct net_device), NETDEV_ALIGN);
12c3: 31 c0 xor %eax,%eax
12c5: 8b 87 d8 08 00 00 mov 0x8d8(%rdi),%eax
12cb: 41 89 c5 mov %eax,%r13d
12ce: 41 c1 ed 0d shr $0xd,%r13d
{
bool ipv6 = vxlan->flags & VXLAN_F_IPV6;
bool metadata = vxlan->flags & VXLAN_F_COLLECT_METADATA;
int ret = 0;
vxlan->vn4_sock = NULL;
12d2: 41 83 e5 01 and $0x1,%r13d
12d6: 83 e0 20 and $0x20,%eax
12d9: 41 89 c4 mov %eax,%r12d
#if IS_ENABLED(CONFIG_IPV6)
vxlan->vn6_sock = NULL;
12dc: 0f 85 94 00 00 00 jne 1376 <vxlan_open+0xf6>
12e2: 45 84 ed test %r13b,%r13b
12e5: 0f 85 8b 00 00 00 jne 1376 <vxlan_open+0xf6>
free_percpu(dev->tstats);
}
/* Start ageing timer and join group when device is brought up */
static int vxlan_open(struct net_device *dev)
{
12eb: 45 85 e4 test %r12d,%r12d
12ee: 0f 84 3f 01 00 00 je 1433 <vxlan_open+0x1b3>
12f4: 45 84 ed test %r13b,%r13b
return 0;
}
static int vxlan_sock_add(struct vxlan_dev *vxlan)
{
bool ipv6 = vxlan->flags & VXLAN_F_IPV6;
12f7: 0f 85 36 01 00 00 jne 1433 <vxlan_open+0x1b3>
bool metadata = vxlan->flags & VXLAN_F_COLLECT_METADATA;
12fd: 45 31 f6 xor %r14d,%r14d
1300: 0f b7 83 80 08 00 00 movzwl 0x880(%rbx),%eax
int ret = 0;
vxlan->vn4_sock = NULL;
#if IS_ENABLED(CONFIG_IPV6)
vxlan->vn6_sock = NULL;
if (ipv6 || metadata)
1307: 66 83 f8 0a cmp $0xa,%ax
130b: 0f 84 9b 00 00 00 je 13ac <vxlan_open+0x12c>
1311: 8b 93 84 08 00 00 mov 0x884(%rbx),%edx
1317: 89 d1 mov %edx,%ecx
1319: 81 e1 f0 00 00 00 and $0xf0,%ecx
ret = __vxlan_sock_add(vxlan, true);
#endif
if (!ret && (!ipv6 || metadata))
131f: 81 f9 e0 00 00 00 cmp $0xe0,%ecx
1325: 0f 84 9c 00 00 00 je 13c7 <vxlan_open+0x147>
132b: 48 83 bb 90 09 00 00 cmpq $0x0,0x990(%rbx)
1332: 00
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
}
static inline bool vxlan_addr_multicast(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
1333: 75 25 jne 135a <vxlan_open+0xda>
1335: 48 8b 55 d0 mov -0x30(%rbp),%rdx
1339: 65 48 33 14 25 28 00 xor %gs:0x28,%rdx
1340: 00 00
return ipv6_addr_is_multicast(&ipa->sin6.sin6_addr);
else
return IN_MULTICAST(ntohl(ipa->sin.sin_addr.s_addr));
1342: 44 89 f0 mov %r14d,%eax
1345: 0f 85 3b 01 00 00 jne 1486 <vxlan_open+0x206>
ret = vxlan_sock_add(vxlan);
if (ret < 0)
return ret;
if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) {
134b: 48 83 c4 18 add $0x18,%rsp
134f: 5b pop %rbx
1350: 41 5c pop %r12
1352: 41 5d pop %r13
1354: 41 5e pop %r14
1356: 41 5f pop %r15
1358: 5d pop %rbp
1359: c3 retq
135a: 48 8b 35 00 00 00 00 mov 0x0(%rip),%rsi # 1361 <vxlan_open+0xe1>
vxlan_sock_release(vxlan);
return ret;
}
}
if (vxlan->cfg.age_interval)
1361: 48 8d bb e0 08 00 00 lea 0x8e0(%rbx),%rdi
mod_timer(&vxlan->age_timer, jiffies + FDB_AGE_INTERVAL);
return ret;
}
1368: 48 81 c6 c4 09 00 00 add $0x9c4,%rsi
136f: e8 00 00 00 00 callq 1374 <vxlan_open+0xf4>
1374: eb bf jmp 1335 <vxlan_open+0xb5>
1376: be 01 00 00 00 mov $0x1,%esi
137b: 4c 89 ff mov %r15,%rdi
137e: e8 bd fa ff ff callq e40 <__vxlan_sock_add>
1383: 85 c0 test %eax,%eax
1385: 41 89 c6 mov %eax,%r14d
1388: 0f 84 5d ff ff ff je 12eb <vxlan_open+0x6b>
return ret;
}
}
if (vxlan->cfg.age_interval)
mod_timer(&vxlan->age_timer, jiffies + FDB_AGE_INTERVAL);
138e: 45 85 f6 test %r14d,%r14d
1391: 0f 89 69 ff ff ff jns 1300 <vxlan_open+0x80>
1397: 48 8d b3 68 08 00 00 lea 0x868(%rbx),%rsi
139e: 48 8d bb 60 08 00 00 lea 0x860(%rbx),%rdi
13a5: e8 e6 f8 ff ff callq c90 <vxlan_sock_release.isra.44>
vxlan->vn4_sock = NULL;
#if IS_ENABLED(CONFIG_IPV6)
vxlan->vn6_sock = NULL;
if (ipv6 || metadata)
ret = __vxlan_sock_add(vxlan, true);
13aa: eb 89 jmp 1335 <vxlan_open+0xb5>
13ac: 0f b6 83 88 08 00 00 movzbl 0x888(%rbx),%eax
#endif
if (!ret && (!ipv6 || metadata))
13b3: 3d ff 00 00 00 cmp $0xff,%eax
13b8: 0f 85 6d ff ff ff jne 132b <vxlan_open+0xab>
ret = __vxlan_sock_add(vxlan, false);
if (ret < 0)
13be: 44 8b ab a4 08 00 00 mov 0x8a4(%rbx),%r13d
13c5: eb 0d jmp 13d4 <vxlan_open+0x154>
13c7: 66 83 f8 02 cmp $0x2,%ax
13cb: 44 8b ab a4 08 00 00 mov 0x8a4(%rbx),%r13d
13d2: 74 71 je 1445 <vxlan_open+0x1c5>
13d4: 48 8b 83 68 08 00 00 mov 0x868(%rbx),%rax
ret = vxlan_igmp_join(vxlan);
if (ret == -EADDRINUSE)
ret = 0;
if (ret) {
vxlan_sock_release(vxlan);
return ret;
13db: 31 f6 xor %esi,%esi
ret = vxlan_sock_add(vxlan);
if (ret < 0)
return ret;
if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) {
13dd: 48 8b 40 10 mov 0x10(%rax),%rax
13e1: 4c 8b 60 20 mov 0x20(%rax),%r12
13e5: 4c 89 e7 mov %r12,%rdi
13e8: e8 00 00 00 00 callq 13ed <vxlan_open+0x16d>
13ed: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 13f4 <vxlan_open+0x174>
*/
static int vxlan_igmp_join(struct vxlan_dev *vxlan)
{
struct sock *sk;
union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
int ifindex = vxlan->default_dst.remote_ifindex;
13f4: 4c 89 e7 mov %r12,%rdi
int ret = -EINVAL;
if (ip->sa.sa_family == AF_INET) {
13f7: 48 8d 93 88 08 00 00 lea 0x888(%rbx),%rdx
*/
static int vxlan_igmp_join(struct vxlan_dev *vxlan)
{
struct sock *sk;
union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
int ifindex = vxlan->default_dst.remote_ifindex;
13fe: 44 89 ee mov %r13d,%esi
1401: ff 10 callq *(%rax)
int ret = -EINVAL;
if (ip->sa.sa_family == AF_INET) {
1403: 4c 89 e7 mov %r12,%rdi
lock_sock(sk);
ret = ip_mc_join_group(sk, &mreq);
release_sock(sk);
#if IS_ENABLED(CONFIG_IPV6)
} else {
sk = vxlan->vn6_sock->sock->sk;
1406: 41 89 c6 mov %eax,%r14d
1409: e8 00 00 00 00 callq 140e <vxlan_open+0x18e>
140e: 41 83 fe 9e cmp $0xffffff9e,%r14d
1412: 74 09 je 141d <vxlan_open+0x19d>
1414: 45 85 f6 test %r14d,%r14d
void lock_sock_nested(struct sock *sk, int subclass);
static inline void lock_sock(struct sock *sk)
{
lock_sock_nested(sk, 0);
1417: 0f 85 7a ff ff ff jne 1397 <vxlan_open+0x117>
lock_sock(sk);
ret = ipv6_stub->ipv6_sock_mc_join(sk, ifindex,
141d: 45 31 f6 xor %r14d,%r14d
1420: 48 83 bb 90 09 00 00 cmpq $0x0,0x990(%rbx)
1427: 00
&ip->sin6.sin6_addr);
1428: 0f 84 07 ff ff ff je 1335 <vxlan_open+0xb5>
release_sock(sk);
#if IS_ENABLED(CONFIG_IPV6)
} else {
sk = vxlan->vn6_sock->sock->sk;
lock_sock(sk);
ret = ipv6_stub->ipv6_sock_mc_join(sk, ifindex,
142e: e9 27 ff ff ff jmpq 135a <vxlan_open+0xda>
&ip->sin6.sin6_addr);
release_sock(sk);
1433: 31 f6 xor %esi,%esi
1435: 4c 89 ff mov %r15,%rdi
release_sock(sk);
#if IS_ENABLED(CONFIG_IPV6)
} else {
sk = vxlan->vn6_sock->sock->sk;
lock_sock(sk);
ret = ipv6_stub->ipv6_sock_mc_join(sk, ifindex,
1438: e8 03 fa ff ff callq e40 <__vxlan_sock_add>
&ip->sin6.sin6_addr);
release_sock(sk);
143d: 41 89 c6 mov %eax,%r14d
if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) {
ret = vxlan_igmp_join(vxlan);
if (ret == -EADDRINUSE)
ret = 0;
if (ret) {
1440: e9 49 ff ff ff jmpq 138e <vxlan_open+0x10e>
1445: 48 8b 83 60 08 00 00 mov 0x860(%rbx),%rax
144c: 31 f6 xor %esi,%esi
144e: 48 c7 45 c4 00 00 00 movq $0x0,-0x3c(%rbp)
1455: 00
vxlan_sock_release(vxlan);
return ret;
}
}
if (vxlan->cfg.age_interval)
1456: 44 89 6d cc mov %r13d,-0x34(%rbp)
145a: 89 55 c4 mov %edx,-0x3c(%rbp)
145d: 48 8b 40 10 mov 0x10(%rax),%rax
1461: 4c 8b 60 20 mov 0x20(%rax),%r12
vxlan->vn6_sock = NULL;
if (ipv6 || metadata)
ret = __vxlan_sock_add(vxlan, true);
#endif
if (!ret && (!ipv6 || metadata))
ret = __vxlan_sock_add(vxlan, false);
1465: 4c 89 e7 mov %r12,%rdi
1468: e8 00 00 00 00 callq 146d <vxlan_open+0x1ed>
146d: 48 8d 75 c4 lea -0x3c(%rbp),%rsi
1471: 4c 89 e7 mov %r12,%rdi
1474: e8 00 00 00 00 callq 1479 <vxlan_open+0x1f9>
struct ip_mreqn mreq = {
.imr_multiaddr.s_addr = ip->sin.sin_addr.s_addr,
.imr_ifindex = ifindex,
};
sk = vxlan->vn4_sock->sock->sk;
1479: 4c 89 e7 mov %r12,%rdi
147c: 41 89 c6 mov %eax,%r14d
union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
int ifindex = vxlan->default_dst.remote_ifindex;
int ret = -EINVAL;
if (ip->sa.sa_family == AF_INET) {
struct ip_mreqn mreq = {
147f: e8 00 00 00 00 callq 1484 <vxlan_open+0x204>
1484: eb 88 jmp 140e <vxlan_open+0x18e>
1486: e8 00 00 00 00 callq 148b <vxlan_open+0x20b>
148b: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
0000000000001490 <vxlan_fill_info>:
.imr_multiaddr.s_addr = ip->sin.sin_addr.s_addr,
.imr_ifindex = ifindex,
};
sk = vxlan->vn4_sock->sock->sk;
1490: e8 00 00 00 00 callq 1495 <vxlan_fill_info+0x5>
1495: 55 push %rbp
1496: ba 04 00 00 00 mov $0x4,%edx
149b: 48 89 e5 mov %rsp,%rbp
lock_sock(sk);
ret = ip_mc_join_group(sk, &mreq);
149e: 41 55 push %r13
14a0: 41 54 push %r12
14a2: 53 push %rbx
14a3: 48 8d 4d c0 lea -0x40(%rbp),%rcx
14a7: 48 89 f3 mov %rsi,%rbx
release_sock(sk);
14aa: 49 89 fc mov %rdi,%r12
.imr_ifindex = ifindex,
};
sk = vxlan->vn4_sock->sock->sk;
lock_sock(sk);
ret = ip_mc_join_group(sk, &mreq);
14ad: 48 83 ec 38 sub $0x38,%rsp
release_sock(sk);
14b1: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
14b8: 00 00
if (vxlan->cfg.age_interval)
mod_timer(&vxlan->age_timer, jiffies + FDB_AGE_INTERVAL);
return ret;
}
14ba: 48 89 45 e0 mov %rax,-0x20(%rbp)
14be: 31 c0 xor %eax,%eax
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_RX */
0;
}
static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
{
14c0: 0f b7 86 7e 09 00 00 movzwl 0x97e(%rsi),%eax
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value)
{
return nla_put(skb, attrtype, sizeof(u32), &value);
14c7: 66 c1 c0 08 rol $0x8,%ax
14cb: 66 89 45 dc mov %ax,-0x24(%rbp)
14cf: 0f b7 86 80 09 00 00 movzwl 0x980(%rsi),%eax
14d6: 66 c1 c0 08 rol $0x8,%ax
14da: 66 89 45 de mov %ax,-0x22(%rbp)
14de: 8b 86 a0 08 00 00 mov 0x8a0(%rsi),%eax
14e4: be 01 00 00 00 mov $0x1,%esi
14e9: 0f c8 bswap %eax
14eb: 89 45 c0 mov %eax,-0x40(%rbp)
14ee: e8 00 00 00 00 callq 14f3 <vxlan_fill_info+0x63>
const struct vxlan_dev *vxlan = netdev_priv(dev);
const struct vxlan_rdst *dst = &vxlan->default_dst;
struct ifla_vxlan_port_range ports = {
.low = htons(vxlan->cfg.port_min),
14f3: 85 c0 test %eax,%eax
14f5: 0f 85 9b 03 00 00 jne 1896 <vxlan_fill_info+0x406>
static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
{
const struct vxlan_dev *vxlan = netdev_priv(dev);
const struct vxlan_rdst *dst = &vxlan->default_dst;
struct ifla_vxlan_port_range ports = {
14fb: 0f b7 83 80 08 00 00 movzwl 0x880(%rbx),%eax
.low = htons(vxlan->cfg.port_min),
.high = htons(vxlan->cfg.port_max),
1502: 66 83 f8 0a cmp $0xa,%ax
1506: 0f 84 2d 04 00 00 je 1939 <vxlan_fill_info+0x4a9>
static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
{
const struct vxlan_dev *vxlan = netdev_priv(dev);
const struct vxlan_rdst *dst = &vxlan->default_dst;
struct ifla_vxlan_port_range ports = {
150c: 8b 93 84 08 00 00 mov 0x884(%rbx),%edx
.low = htons(vxlan->cfg.port_min),
.high = htons(vxlan->cfg.port_max),
};
if (nla_put_u32(skb, IFLA_VXLAN_ID, be32_to_cpu(dst->remote_vni)))
1512: 85 d2 test %edx,%edx
1514: 0f 85 a3 03 00 00 jne 18bd <vxlan_fill_info+0x42d>
151a: 8b 83 a4 08 00 00 mov 0x8a4(%rbx),%eax
1520: 85 c0 test %eax,%eax
1522: 0f 85 ca 03 00 00 jne 18f2 <vxlan_fill_info+0x462>
1528: 0f b7 83 54 09 00 00 movzwl 0x954(%rbx),%eax
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
152f: 66 83 f8 0a cmp $0xa,%ax
1533: 0f 84 e7 03 00 00 je 1920 <vxlan_fill_info+0x490>
1539: 8b 93 58 09 00 00 mov 0x958(%rbx),%edx
return ipv6_addr_any(&ipa->sin6.sin6_addr);
else
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
153f: 85 d2 test %edx,%edx
1541: 74 2b je 156e <vxlan_fill_info+0xde>
};
if (nla_put_u32(skb, IFLA_VXLAN_ID, be32_to_cpu(dst->remote_vni)))
goto nla_put_failure;
if (!vxlan_addr_any(&dst->remote_ip)) {
1543: 66 83 f8 02 cmp $0x2,%ax
1547: 0f 84 05 04 00 00 je 1952 <vxlan_fill_info+0x4c2>
goto nla_put_failure;
#endif
}
}
if (dst->remote_ifindex && nla_put_u32(skb, IFLA_VXLAN_LINK, dst->remote_ifindex))
154d: 48 8d 8b 5c 09 00 00 lea 0x95c(%rbx),%rcx
1554: ba 10 00 00 00 mov $0x10,%edx
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
1559: be 11 00 00 00 mov $0x11,%esi
155e: 4c 89 e7 mov %r12,%rdi
1561: e8 00 00 00 00 callq 1566 <vxlan_fill_info+0xd6>
1566: 85 c0 test %eax,%eax
1568: 0f 85 28 03 00 00 jne 1896 <vxlan_fill_info+0x406>
return ipv6_addr_any(&ipa->sin6.sin6_addr);
else
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
156e: 0f b6 83 83 09 00 00 movzbl 0x983(%rbx),%eax
if (dst->remote_ifindex && nla_put_u32(skb, IFLA_VXLAN_LINK, dst->remote_ifindex))
goto nla_put_failure;
if (!vxlan_addr_any(&vxlan->cfg.saddr)) {
if (vxlan->cfg.saddr.sa.sa_family == AF_INET) {
1575: 48 8d 4d b1 lea -0x4f(%rbp),%rcx
1579: ba 01 00 00 00 mov $0x1,%edx
if (nla_put_in_addr(skb, IFLA_VXLAN_LOCAL,
vxlan->cfg.saddr.sin.sin_addr.s_addr))
goto nla_put_failure;
#if IS_ENABLED(CONFIG_IPV6)
} else {
if (nla_put_in6_addr(skb, IFLA_VXLAN_LOCAL6,
157e: be 05 00 00 00 mov $0x5,%esi
1583: 4c 89 e7 mov %r12,%rdi
* @addr: IPv6 address
*/
static inline int nla_put_in6_addr(struct sk_buff *skb, int attrtype,
const struct in6_addr *addr)
{
return nla_put(skb, attrtype, sizeof(*addr), addr);
1586: 88 45 b1 mov %al,-0x4f(%rbp)
1589: e8 00 00 00 00 callq 158e <vxlan_fill_info+0xfe>
158e: 85 c0 test %eax,%eax
1590: 0f 85 00 03 00 00 jne 1896 <vxlan_fill_info+0x406>
1596: 0f b6 83 82 09 00 00 movzbl 0x982(%rbx),%eax
159d: 48 8d 4d b2 lea -0x4e(%rbp),%rcx
15a1: ba 01 00 00 00 mov $0x1,%edx
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_u8(struct sk_buff *skb, int attrtype, u8 value)
{
return nla_put(skb, attrtype, sizeof(u8), &value);
15a6: be 06 00 00 00 mov $0x6,%esi
15ab: 4c 89 e7 mov %r12,%rdi
15ae: 88 45 b2 mov %al,-0x4e(%rbp)
15b1: e8 00 00 00 00 callq 15b6 <vxlan_fill_info+0x126>
15b6: 85 c0 test %eax,%eax
15b8: 0f 85 d8 02 00 00 jne 1896 <vxlan_fill_info+0x406>
goto nla_put_failure;
#endif
}
}
if (nla_put_u8(skb, IFLA_VXLAN_TTL, vxlan->cfg.ttl) ||
15be: 8b 83 84 09 00 00 mov 0x984(%rbx),%eax
15c4: 48 8d 4d d0 lea -0x30(%rbp),%rcx
15c8: ba 04 00 00 00 mov $0x4,%edx
15cd: be 1a 00 00 00 mov $0x1a,%esi
15d2: 4c 89 e7 mov %r12,%rdi
15d5: 89 45 d0 mov %eax,-0x30(%rbp)
15d8: e8 00 00 00 00 callq 15dd <vxlan_fill_info+0x14d>
15dd: 85 c0 test %eax,%eax
15df: 0f 85 b1 02 00 00 jne 1896 <vxlan_fill_info+0x406>
15e5: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
15eb: 48 8d 4d b3 lea -0x4d(%rbp),%rcx
15ef: ba 01 00 00 00 mov $0x1,%edx
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_be32(struct sk_buff *skb, int attrtype, __be32 value)
{
return nla_put(skb, attrtype, sizeof(__be32), &value);
15f4: be 07 00 00 00 mov $0x7,%esi
15f9: 4c 89 e7 mov %r12,%rdi
15fc: 88 45 b3 mov %al,-0x4d(%rbp)
15ff: 80 65 b3 01 andb $0x1,-0x4d(%rbp)
1603: e8 00 00 00 00 callq 1608 <vxlan_fill_info+0x178>
1608: 85 c0 test %eax,%eax
160a: 0f 85 86 02 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_u8(skb, IFLA_VXLAN_TOS, vxlan->cfg.tos) ||
1610: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
1616: 48 8d 4d b4 lea -0x4c(%rbp),%rcx
161a: ba 01 00 00 00 mov $0x1,%edx
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_u8(struct sk_buff *skb, int attrtype, u8 value)
{
return nla_put(skb, attrtype, sizeof(u8), &value);
161f: be 0b 00 00 00 mov $0xb,%esi
1624: 4c 89 e7 mov %r12,%rdi
1627: d1 e8 shr %eax
1629: 83 e0 01 and $0x1,%eax
162c: 88 45 b4 mov %al,-0x4c(%rbp)
162f: e8 00 00 00 00 callq 1634 <vxlan_fill_info+0x1a4>
1634: 85 c0 test %eax,%eax
1636: 0f 85 5a 02 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_be32(skb, IFLA_VXLAN_LABEL, vxlan->cfg.label) ||
163c: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
1642: 48 8d 4d b5 lea -0x4b(%rbp),%rcx
1646: ba 01 00 00 00 mov $0x1,%edx
164b: be 0c 00 00 00 mov $0xc,%esi
1650: 4c 89 e7 mov %r12,%rdi
1653: c1 e8 02 shr $0x2,%eax
1656: 83 e0 01 and $0x1,%eax
1659: 88 45 b5 mov %al,-0x4b(%rbp)
165c: e8 00 00 00 00 callq 1661 <vxlan_fill_info+0x1d1>
1661: 85 c0 test %eax,%eax
1663: 0f 85 2d 02 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_u8(skb, IFLA_VXLAN_LEARNING,
!!(vxlan->flags & VXLAN_F_LEARN)) ||
1669: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
166f: 48 8d 4d b6 lea -0x4a(%rbp),%rcx
1673: ba 01 00 00 00 mov $0x1,%edx
1678: be 0d 00 00 00 mov $0xd,%esi
167d: 4c 89 e7 mov %r12,%rdi
1680: c1 e8 03 shr $0x3,%eax
1683: 83 e0 01 and $0x1,%eax
1686: 88 45 b6 mov %al,-0x4a(%rbp)
1689: e8 00 00 00 00 callq 168e <vxlan_fill_info+0x1fe>
168e: 85 c0 test %eax,%eax
1690: 0f 85 00 02 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_u8(skb, IFLA_VXLAN_PROXY,
!!(vxlan->flags & VXLAN_F_PROXY)) ||
1696: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
169c: 48 8d 4d b7 lea -0x49(%rbp),%rcx
16a0: ba 01 00 00 00 mov $0x1,%edx
16a5: be 0e 00 00 00 mov $0xe,%esi
16aa: 4c 89 e7 mov %r12,%rdi
16ad: c1 e8 04 shr $0x4,%eax
16b0: 83 e0 01 and $0x1,%eax
16b3: 88 45 b7 mov %al,-0x49(%rbp)
16b6: e8 00 00 00 00 callq 16bb <vxlan_fill_info+0x22b>
16bb: 85 c0 test %eax,%eax
16bd: 0f 85 d3 01 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_u8(skb, IFLA_VXLAN_RSC, !!(vxlan->flags & VXLAN_F_RSC)) ||
16c3: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
16c9: 48 8d 4d b8 lea -0x48(%rbp),%rcx
16cd: ba 01 00 00 00 mov $0x1,%edx
16d2: be 19 00 00 00 mov $0x19,%esi
16d7: 4c 89 e7 mov %r12,%rdi
16da: c1 e8 0d shr $0xd,%eax
16dd: 83 e0 01 and $0x1,%eax
16e0: 88 45 b8 mov %al,-0x48(%rbp)
16e3: e8 00 00 00 00 callq 16e8 <vxlan_fill_info+0x258>
16e8: 85 c0 test %eax,%eax
16ea: 0f 85 a6 01 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_u8(skb, IFLA_VXLAN_L2MISS,
!!(vxlan->flags & VXLAN_F_L2MISS)) ||
16f0: 48 8b 83 90 09 00 00 mov 0x990(%rbx),%rax
16f7: 48 8d 4d d4 lea -0x2c(%rbp),%rcx
16fb: ba 04 00 00 00 mov $0x4,%edx
1700: be 08 00 00 00 mov $0x8,%esi
1705: 4c 89 e7 mov %r12,%rdi
1708: 89 45 d4 mov %eax,-0x2c(%rbp)
170b: e8 00 00 00 00 callq 1710 <vxlan_fill_info+0x280>
1710: 85 c0 test %eax,%eax
1712: 0f 85 7e 01 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_u8(skb, IFLA_VXLAN_L3MISS,
!!(vxlan->flags & VXLAN_F_L3MISS)) ||
1718: 8b 83 98 09 00 00 mov 0x998(%rbx),%eax
171e: 48 8d 4d d8 lea -0x28(%rbp),%rcx
1722: ba 04 00 00 00 mov $0x4,%edx
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value)
{
return nla_put(skb, attrtype, sizeof(u32), &value);
1727: be 09 00 00 00 mov $0x9,%esi
172c: 4c 89 e7 mov %r12,%rdi
172f: 89 45 d8 mov %eax,-0x28(%rbp)
1732: e8 00 00 00 00 callq 1737 <vxlan_fill_info+0x2a7>
1737: 85 c0 test %eax,%eax
1739: 0f 85 57 01 00 00 jne 1896 <vxlan_fill_info+0x406>
173f: 0f b7 83 7c 09 00 00 movzwl 0x97c(%rbx),%eax
nla_put_u8(skb, IFLA_VXLAN_COLLECT_METADATA,
!!(vxlan->flags & VXLAN_F_COLLECT_METADATA)) ||
1746: 48 8d 4d be lea -0x42(%rbp),%rcx
174a: ba 02 00 00 00 mov $0x2,%edx
174f: be 0f 00 00 00 mov $0xf,%esi
1754: 4c 89 e7 mov %r12,%rdi
1757: 66 89 45 be mov %ax,-0x42(%rbp)
175b: e8 00 00 00 00 callq 1760 <vxlan_fill_info+0x2d0>
1760: 85 c0 test %eax,%eax
1762: 0f 85 2e 01 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_u32(skb, IFLA_VXLAN_AGEING, vxlan->cfg.age_interval) ||
1768: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
176e: 48 8d 4d b9 lea -0x47(%rbp),%rcx
1772: ba 01 00 00 00 mov $0x1,%edx
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_be16(struct sk_buff *skb, int attrtype, __be16 value)
{
return nla_put(skb, attrtype, sizeof(__be16), &value);
1777: be 12 00 00 00 mov $0x12,%esi
177c: 4c 89 e7 mov %r12,%rdi
177f: c1 e8 06 shr $0x6,%eax
1782: 83 f0 01 xor $0x1,%eax
1785: 83 e0 01 and $0x1,%eax
1788: 88 45 b9 mov %al,-0x47(%rbp)
178b: e8 00 00 00 00 callq 1790 <vxlan_fill_info+0x300>
nla_put_u32(skb, IFLA_VXLAN_LIMIT, vxlan->cfg.addrmax) ||
1790: 85 c0 test %eax,%eax
1792: 0f 85 fe 00 00 00 jne 1896 <vxlan_fill_info+0x406>
1798: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_u8(struct sk_buff *skb, int attrtype, u8 value)
{
return nla_put(skb, attrtype, sizeof(u8), &value);
179e: 48 8d 4d ba lea -0x46(%rbp),%rcx
17a2: ba 01 00 00 00 mov $0x1,%edx
17a7: be 13 00 00 00 mov $0x13,%esi
17ac: 4c 89 e7 mov %r12,%rdi
17af: c1 e8 07 shr $0x7,%eax
17b2: 83 e0 01 and $0x1,%eax
17b5: 88 45 ba mov %al,-0x46(%rbp)
17b8: e8 00 00 00 00 callq 17bd <vxlan_fill_info+0x32d>
17bd: 85 c0 test %eax,%eax
17bf: 0f 85 d1 00 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_be16(skb, IFLA_VXLAN_PORT, vxlan->cfg.dst_port) ||
17c5: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
17cb: 48 8d 4d bb lea -0x45(%rbp),%rcx
17cf: ba 01 00 00 00 mov $0x1,%edx
17d4: be 14 00 00 00 mov $0x14,%esi
17d9: 4c 89 e7 mov %r12,%rdi
17dc: c1 e8 08 shr $0x8,%eax
17df: 83 e0 01 and $0x1,%eax
17e2: 88 45 bb mov %al,-0x45(%rbp)
17e5: e8 00 00 00 00 callq 17ea <vxlan_fill_info+0x35a>
17ea: 85 c0 test %eax,%eax
17ec: 0f 85 a4 00 00 00 jne 1896 <vxlan_fill_info+0x406>
nla_put_u8(skb, IFLA_VXLAN_UDP_CSUM,
!(vxlan->flags & VXLAN_F_UDP_ZERO_CSUM_TX)) ||
17f2: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
17f8: 48 8d 4d bc lea -0x44(%rbp),%rcx
17fc: ba 01 00 00 00 mov $0x1,%edx
1801: be 15 00 00 00 mov $0x15,%esi
1806: 4c 89 e7 mov %r12,%rdi
1809: c1 e8 09 shr $0x9,%eax
180c: 83 e0 01 and $0x1,%eax
180f: 88 45 bc mov %al,-0x44(%rbp)
1812: e8 00 00 00 00 callq 1817 <vxlan_fill_info+0x387>
1817: 85 c0 test %eax,%eax
1819: 75 7b jne 1896 <vxlan_fill_info+0x406>
nla_put_u8(skb, IFLA_VXLAN_UDP_ZERO_CSUM6_TX,
!!(vxlan->flags & VXLAN_F_UDP_ZERO_CSUM6_TX)) ||
181b: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
1821: 48 8d 4d bd lea -0x43(%rbp),%rcx
1825: ba 01 00 00 00 mov $0x1,%edx
182a: be 16 00 00 00 mov $0x16,%esi
182f: 4c 89 e7 mov %r12,%rdi
1832: c1 e8 0a shr $0xa,%eax
1835: 83 e0 01 and $0x1,%eax
1838: 88 45 bd mov %al,-0x43(%rbp)
183b: e8 00 00 00 00 callq 1840 <vxlan_fill_info+0x3b0>
1840: 85 c0 test %eax,%eax
1842: 75 52 jne 1896 <vxlan_fill_info+0x406>
1844: 48 8d 4d dc lea -0x24(%rbp),%rcx
nla_put_u8(skb, IFLA_VXLAN_UDP_ZERO_CSUM6_RX,
!!(vxlan->flags & VXLAN_F_UDP_ZERO_CSUM6_RX)) ||
1848: ba 04 00 00 00 mov $0x4,%edx
184d: be 0a 00 00 00 mov $0xa,%esi
1852: 4c 89 e7 mov %r12,%rdi
1855: e8 00 00 00 00 callq 185a <vxlan_fill_info+0x3ca>
185a: 85 c0 test %eax,%eax
185c: 41 89 c5 mov %eax,%r13d
185f: 75 35 jne 1896 <vxlan_fill_info+0x406>
1861: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
1867: f6 c4 08 test $0x8,%ah
186a: 0f 85 2e 01 00 00 jne 199e <vxlan_fill_info+0x50e>
nla_put_u8(skb, IFLA_VXLAN_REMCSUM_TX,
!!(vxlan->flags & VXLAN_F_REMCSUM_TX)) ||
1870: f6 c4 40 test $0x40,%ah
1873: 0f 85 49 01 00 00 jne 19c2 <vxlan_fill_info+0x532>
nla_put_u8(skb, IFLA_VXLAN_REMCSUM_RX,
!!(vxlan->flags & VXLAN_F_REMCSUM_RX)))
goto nla_put_failure;
if (nla_put(skb, IFLA_VXLAN_PORT_RANGE, sizeof(ports), &ports))
1879: f6 c4 10 test $0x10,%ah
187c: 74 1e je 189c <vxlan_fill_info+0x40c>
187e: 31 c9 xor %ecx,%ecx
1880: 31 d2 xor %edx,%edx
1882: be 18 00 00 00 mov $0x18,%esi
1887: 4c 89 e7 mov %r12,%rdi
188a: e8 00 00 00 00 callq 188f <vxlan_fill_info+0x3ff>
188f: 85 c0 test %eax,%eax
goto nla_put_failure;
if (vxlan->flags & VXLAN_F_GBP &&
1891: 41 89 c5 mov %eax,%r13d
1894: 74 06 je 189c <vxlan_fill_info+0x40c>
1896: 41 bd a6 ff ff ff mov $0xffffffa6,%r13d
189c: 48 8b 75 e0 mov -0x20(%rbp),%rsi
nla_put_flag(skb, IFLA_VXLAN_GBP))
goto nla_put_failure;
if (vxlan->flags & VXLAN_F_GPE &&
18a0: 65 48 33 34 25 28 00 xor %gs:0x28,%rsi
18a7: 00 00
nla_put_flag(skb, IFLA_VXLAN_GPE))
goto nla_put_failure;
if (vxlan->flags & VXLAN_F_REMCSUM_NOPARTIAL &&
18a9: 44 89 e8 mov %r13d,%eax
18ac: 0f 85 34 01 00 00 jne 19e6 <vxlan_fill_info+0x556>
* @skb: socket buffer to add attribute to
* @attrtype: attribute type
*/
static inline int nla_put_flag(struct sk_buff *skb, int attrtype)
{
return nla_put(skb, attrtype, 0, NULL);
18b2: 48 83 c4 38 add $0x38,%rsp
18b6: 5b pop %rbx
18b7: 41 5c pop %r12
18b9: 41 5d pop %r13
18bb: 5d pop %rbp
18bc: c3 retq
18bd: 66 83 f8 02 cmp $0x2,%ax
18c1: 0f 84 b1 00 00 00 je 1978 <vxlan_fill_info+0x4e8>
goto nla_put_failure;
return 0;
nla_put_failure:
return -EMSGSIZE;
18c7: 48 8d 8b 88 08 00 00 lea 0x888(%rbx),%rcx
}
18ce: ba 10 00 00 00 mov $0x10,%edx
18d3: be 10 00 00 00 mov $0x10,%esi
18d8: 4c 89 e7 mov %r12,%rdi
18db: e8 00 00 00 00 callq 18e0 <vxlan_fill_info+0x450>
18e0: 85 c0 test %eax,%eax
18e2: 75 b2 jne 1896 <vxlan_fill_info+0x406>
18e4: 8b 83 a4 08 00 00 mov 0x8a4(%rbx),%eax
18ea: 85 c0 test %eax,%eax
18ec: 0f 84 36 fc ff ff je 1528 <vxlan_fill_info+0x98>
if (nla_put_u32(skb, IFLA_VXLAN_ID, be32_to_cpu(dst->remote_vni)))
goto nla_put_failure;
if (!vxlan_addr_any(&dst->remote_ip)) {
if (dst->remote_ip.sa.sa_family == AF_INET) {
18f2: 48 8d 4d c8 lea -0x38(%rbp),%rcx
18f6: ba 04 00 00 00 mov $0x4,%edx
if (nla_put_in_addr(skb, IFLA_VXLAN_GROUP,
dst->remote_ip.sin.sin_addr.s_addr))
goto nla_put_failure;
#if IS_ENABLED(CONFIG_IPV6)
} else {
if (nla_put_in6_addr(skb, IFLA_VXLAN_GROUP6,
18fb: be 03 00 00 00 mov $0x3,%esi
* @addr: IPv6 address
*/
static inline int nla_put_in6_addr(struct sk_buff *skb, int attrtype,
const struct in6_addr *addr)
{
return nla_put(skb, attrtype, sizeof(*addr), addr);
1900: 4c 89 e7 mov %r12,%rdi
1903: 89 45 c8 mov %eax,-0x38(%rbp)
1906: e8 00 00 00 00 callq 190b <vxlan_fill_info+0x47b>
190b: 85 c0 test %eax,%eax
190d: 75 87 jne 1896 <vxlan_fill_info+0x406>
190f: 0f b7 83 54 09 00 00 movzwl 0x954(%rbx),%eax
goto nla_put_failure;
#endif
}
}
if (dst->remote_ifindex && nla_put_u32(skb, IFLA_VXLAN_LINK, dst->remote_ifindex))
1916: 66 83 f8 0a cmp $0xa,%ax
191a: 0f 85 19 fc ff ff jne 1539 <vxlan_fill_info+0xa9>
1920: 48 8b 83 5c 09 00 00 mov 0x95c(%rbx),%rax
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value)
{
return nla_put(skb, attrtype, sizeof(u32), &value);
1927: 48 0b 83 64 09 00 00 or 0x964(%rbx),%rax
192e: 0f 85 19 fc ff ff jne 154d <vxlan_fill_info+0xbd>
1934: e9 35 fc ff ff jmpq 156e <vxlan_fill_info+0xde>
1939: 48 8b 83 88 08 00 00 mov 0x888(%rbx),%rax
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
1940: 48 0b 83 90 08 00 00 or 0x890(%rbx),%rax
1947: 0f 85 7a ff ff ff jne 18c7 <vxlan_fill_info+0x437>
194d: e9 c8 fb ff ff jmpq 151a <vxlan_fill_info+0x8a>
}
if (dst->remote_ifindex && nla_put_u32(skb, IFLA_VXLAN_LINK, dst->remote_ifindex))
goto nla_put_failure;
if (!vxlan_addr_any(&vxlan->cfg.saddr)) {
1952: 48 8d 4d cc lea -0x34(%rbp),%rcx
1956: 89 55 cc mov %edx,-0x34(%rbp)
1959: be 04 00 00 00 mov $0x4,%esi
195e: ba 04 00 00 00 mov $0x4,%edx
1963: 4c 89 e7 mov %r12,%rdi
1966: e8 00 00 00 00 callq 196b <vxlan_fill_info+0x4db>
};
if (nla_put_u32(skb, IFLA_VXLAN_ID, be32_to_cpu(dst->remote_vni)))
goto nla_put_failure;
if (!vxlan_addr_any(&dst->remote_ip)) {
196b: 85 c0 test %eax,%eax
196d: 0f 84 fb fb ff ff je 156e <vxlan_fill_info+0xde>
1973: e9 1e ff ff ff jmpq 1896 <vxlan_fill_info+0x406>
1978: 48 8d 4d c4 lea -0x3c(%rbp),%rcx
197c: 89 55 c4 mov %edx,-0x3c(%rbp)
197f: be 02 00 00 00 mov $0x2,%esi
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_be32(struct sk_buff *skb, int attrtype, __be32 value)
{
return nla_put(skb, attrtype, sizeof(__be32), &value);
1984: ba 04 00 00 00 mov $0x4,%edx
1989: 4c 89 e7 mov %r12,%rdi
198c: e8 00 00 00 00 callq 1991 <vxlan_fill_info+0x501>
1991: 85 c0 test %eax,%eax
1993: 0f 84 81 fb ff ff je 151a <vxlan_fill_info+0x8a>
1999: e9 f8 fe ff ff jmpq 1896 <vxlan_fill_info+0x406>
if (dst->remote_ifindex && nla_put_u32(skb, IFLA_VXLAN_LINK, dst->remote_ifindex))
goto nla_put_failure;
if (!vxlan_addr_any(&vxlan->cfg.saddr)) {
if (vxlan->cfg.saddr.sa.sa_family == AF_INET) {
if (nla_put_in_addr(skb, IFLA_VXLAN_LOCAL,
199e: 31 c9 xor %ecx,%ecx
19a0: 31 d2 xor %edx,%edx
19a2: be 17 00 00 00 mov $0x17,%esi
19a7: 4c 89 e7 mov %r12,%rdi
19aa: e8 00 00 00 00 callq 19af <vxlan_fill_info+0x51f>
19af: 85 c0 test %eax,%eax
19b1: 0f 85 df fe ff ff jne 1896 <vxlan_fill_info+0x406>
19b7: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
19bd: e9 ae fe ff ff jmpq 1870 <vxlan_fill_info+0x3e0>
if (nla_put_u32(skb, IFLA_VXLAN_ID, be32_to_cpu(dst->remote_vni)))
goto nla_put_failure;
if (!vxlan_addr_any(&dst->remote_ip)) {
if (dst->remote_ip.sa.sa_family == AF_INET) {
if (nla_put_in_addr(skb, IFLA_VXLAN_GROUP,
19c2: 31 c9 xor %ecx,%ecx
19c4: 31 d2 xor %edx,%edx
19c6: be 1b 00 00 00 mov $0x1b,%esi
19cb: 4c 89 e7 mov %r12,%rdi
* @skb: socket buffer to add attribute to
* @attrtype: attribute type
*/
static inline int nla_put_flag(struct sk_buff *skb, int attrtype)
{
return nla_put(skb, attrtype, 0, NULL);
19ce: e8 00 00 00 00 callq 19d3 <vxlan_fill_info+0x543>
19d3: 85 c0 test %eax,%eax
19d5: 0f 85 bb fe ff ff jne 1896 <vxlan_fill_info+0x406>
19db: 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%eax
goto nla_put_failure;
if (nla_put(skb, IFLA_VXLAN_PORT_RANGE, sizeof(ports), &ports))
goto nla_put_failure;
if (vxlan->flags & VXLAN_F_GBP &&
19e1: e9 93 fe ff ff jmpq 1879 <vxlan_fill_info+0x3e9>
19e6: e8 00 00 00 00 callq 19eb <vxlan_fill_info+0x55b>
19eb: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
00000000000019f0 <vxlan_fdb_info>:
19f0: e8 00 00 00 00 callq 19f5 <vxlan_fdb_info+0x5>
19f5: 55 push %rbp
19f6: 48 89 e5 mov %rsp,%rbp
19f9: 41 57 push %r15
19fb: 41 56 push %r14
19fd: 41 55 push %r13
19ff: 41 54 push %r12
1a01: 53 push %rbx
1a02: 48 83 ec 40 sub $0x40,%rsp
nla_put_flag(skb, IFLA_VXLAN_GBP))
goto nla_put_failure;
if (vxlan->flags & VXLAN_F_GPE &&
1a06: 44 8b 97 84 00 00 00 mov 0x84(%rdi),%r10d
1a0d: 4c 8b 75 18 mov 0x18(%rbp),%r14
1a11: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
1a18: 00 00
return 0;
nla_put_failure:
return -EMSGSIZE;
}
1a1a: 48 89 45 d0 mov %rax,-0x30(%rbp)
1a1e: 31 c0 xor %eax,%eax
/* Fill in neighbour message in skbuff. */
static int vxlan_fdb_info(struct sk_buff *skb, struct vxlan_dev *vxlan,
const struct vxlan_fdb *fdb,
u32 portid, u32 seq, int type, unsigned int flags,
const struct vxlan_rdst *rdst)
{
1a20: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 1a27 <vxlan_fdb_info+0x37>
1a27: 45 85 d2 test %r10d,%r10d
1a2a: 48 89 45 a0 mov %rax,-0x60(%rbp)
1a2e: 0f 85 34 02 00 00 jne 1c68 <vxlan_fdb_info+0x278>
1a34: 8b 87 cc 00 00 00 mov 0xcc(%rdi),%eax
*
* Return the number of bytes of free space at the tail of an sk_buff
*/
static inline int skb_tailroom(const struct sk_buff *skb)
{
return skb_is_nonlinear(skb) ? 0 : skb->end - skb->tail;
1a3a: 2b 87 c8 00 00 00 sub 0xc8(%rdi),%eax
1a40: 49 89 fc mov %rdi,%r12
1a43: 83 f8 1b cmp $0x1b,%eax
1a46: 0f 8e 1c 02 00 00 jle 1c68 <vxlan_fdb_info+0x278>
1a4c: 45 89 cb mov %r9d,%r11d
1a4f: 44 8b 4d 10 mov 0x10(%rbp),%r9d
unsigned long now = jiffies;
1a53: 49 89 f7 mov %rsi,%r15
1a56: 49 89 d5 mov %rdx,%r13
1a59: 89 ce mov %ecx,%esi
1a5b: 44 89 c2 mov %r8d,%edx
1a5e: 44 89 d9 mov %r11d,%ecx
1a61: 41 b8 0c 00 00 00 mov $0xc,%r8d
* the message header and payload.
*/
static inline struct nlmsghdr *nlmsg_put(struct sk_buff *skb, u32 portid, u32 seq,
int type, int payload, int flags)
{
if (unlikely(skb_tailroom(skb) < nlmsg_total_size(payload)))
1a67: 44 89 5d 9c mov %r11d,-0x64(%rbp)
1a6b: e8 00 00 00 00 callq 1a70 <vxlan_fdb_info+0x80>
1a70: 48 85 c0 test %rax,%rax
1a73: 48 89 c3 mov %rax,%rbx
1a76: 0f 84 ec 01 00 00 je 1c68 <vxlan_fdb_info+0x278>
1a7c: 44 8b 5d 9c mov -0x64(%rbp),%r11d
return NULL;
return __nlmsg_put(skb, portid, seq, type, payload, flags);
1a80: 48 c7 40 10 00 00 00 movq $0x0,0x10(%rax)
1a87: 00
1a88: c7 40 18 00 00 00 00 movl $0x0,0x18(%rax)
1a8f: 41 83 fb 1e cmp $0x1e,%r11d
1a93: 0f 84 22 02 00 00 je 1cbb <vxlan_fdb_info+0x2cb>
1a99: c6 40 10 07 movb $0x7,0x10(%rax)
1a9d: 41 b9 01 00 00 00 mov $0x1,%r9d
1aa3: 41 b8 01 00 00 00 mov $0x1,%r8d
struct nlmsghdr *nlh;
struct ndmsg *ndm;
bool send_ip, send_eth;
nlh = nlmsg_put(skb, portid, seq, type, sizeof(*ndm), flags);
if (nlh == NULL)
1aa9: 41 0f b7 45 46 movzwl 0x46(%r13),%eax
ndm = nlmsg_data(nlh);
memset(ndm, 0, sizeof(*ndm));
send_eth = send_ip = true;
if (type == RTM_GETNEIGH) {
1aae: 66 89 43 18 mov %ax,0x18(%rbx)
nlh = nlmsg_put(skb, portid, seq, type, sizeof(*ndm), flags);
if (nlh == NULL)
return -EMSGSIZE;
ndm = nlmsg_data(nlh);
memset(ndm, 0, sizeof(*ndm));
1ab2: 49 8b 47 30 mov 0x30(%r15),%rax
1ab6: 8b 80 28 01 00 00 mov 0x128(%rax),%eax
1abc: 89 43 14 mov %eax,0x14(%rbx)
send_eth = send_ip = true;
if (type == RTM_GETNEIGH) {
1abf: 41 0f b6 45 48 movzbl 0x48(%r13),%eax
1ac4: c6 43 1b 01 movb $0x1,0x1b(%rbx)
1ac8: 88 43 1a mov %al,0x1a(%rbx)
ndm->ndm_family = AF_INET;
send_ip = !vxlan_addr_any(&rdst->remote_ip);
send_eth = !is_zero_ether_addr(fdb->eth_addr);
} else
ndm->ndm_family = AF_BRIDGE;
1acb: 49 8b 47 30 mov 0x30(%r15),%rax
return -EMSGSIZE;
ndm = nlmsg_data(nlh);
memset(ndm, 0, sizeof(*ndm));
send_eth = send_ip = true;
1acf: 49 8b 77 38 mov 0x38(%r15),%rsi
1ad3: 48 8b b8 80 04 00 00 mov 0x480(%rax),%rdi
ndm->ndm_family = AF_INET;
send_ip = !vxlan_addr_any(&rdst->remote_ip);
send_eth = !is_zero_ether_addr(fdb->eth_addr);
} else
ndm->ndm_family = AF_BRIDGE;
ndm->ndm_state = fdb->state;
1ada: 48 39 fe cmp %rdi,%rsi
1add: 74 38 je 1b17 <vxlan_fdb_info+0x127>
1adf: 44 88 4d 9b mov %r9b,-0x65(%rbp)
ndm->ndm_ifindex = vxlan->dev->ifindex;
1ae3: 44 88 45 9c mov %r8b,-0x64(%rbp)
1ae7: e8 00 00 00 00 callq 1aec <vxlan_fdb_info+0xfc>
1aec: 48 8d 4d b0 lea -0x50(%rbp),%rcx
ndm->ndm_flags = fdb->flags;
1af0: ba 04 00 00 00 mov $0x4,%edx
ndm->ndm_type = RTN_UNICAST;
1af5: be 0a 00 00 00 mov $0xa,%esi
send_eth = !is_zero_ether_addr(fdb->eth_addr);
} else
ndm->ndm_family = AF_BRIDGE;
ndm->ndm_state = fdb->state;
ndm->ndm_ifindex = vxlan->dev->ifindex;
ndm->ndm_flags = fdb->flags;
1afa: 4c 89 e7 mov %r12,%rdi
1afd: 89 45 b0 mov %eax,-0x50(%rbp)
ndm->ndm_type = RTN_UNICAST;
if (!net_eq(dev_net(vxlan->dev), vxlan->net) &&
1b00: e8 00 00 00 00 callq 1b05 <vxlan_fdb_info+0x115>
1b05: 85 c0 test %eax,%eax
1b07: 44 0f b6 45 9c movzbl -0x64(%rbp),%r8d
1b0c: 44 0f b6 4d 9b movzbl -0x65(%rbp),%r9d
1b11: 0f 85 32 01 00 00 jne 1c49 <vxlan_fdb_info+0x259>
nla_put_s32(skb, NDA_LINK_NETNSID,
1b17: 45 84 c9 test %r9b,%r9b
1b1a: 0f 85 72 01 00 00 jne 1c92 <vxlan_fdb_info+0x2a2>
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_s32(struct sk_buff *skb, int attrtype, s32 value)
{
return nla_put(skb, attrtype, sizeof(s32), &value);
1b20: 45 84 c0 test %r8b,%r8b
1b23: 0f 85 f0 00 00 00 jne 1c19 <vxlan_fdb_info+0x229>
1b29: 41 0f b7 46 1c movzwl 0x1c(%r14),%eax
1b2e: 66 85 c0 test %ax,%ax
1b31: 74 2c je 1b5f <vxlan_fdb_info+0x16f>
1b33: 66 41 3b 87 3c 01 00 cmp 0x13c(%r15),%ax
1b3a: 00
ndm->ndm_state = fdb->state;
ndm->ndm_ifindex = vxlan->dev->ifindex;
ndm->ndm_flags = fdb->flags;
ndm->ndm_type = RTN_UNICAST;
if (!net_eq(dev_net(vxlan->dev), vxlan->net) &&
1b3b: 74 22 je 1b5f <vxlan_fdb_info+0x16f>
1b3d: 48 8d 4d ae lea -0x52(%rbp),%rcx
1b41: ba 02 00 00 00 mov $0x2,%edx
1b46: be 06 00 00 00 mov $0x6,%esi
nla_put_s32(skb, NDA_LINK_NETNSID,
peernet2id_alloc(dev_net(vxlan->dev), vxlan->net)))
goto nla_put_failure;
if (send_eth && nla_put(skb, NDA_LLADDR, ETH_ALEN, &fdb->eth_addr))
1b4b: 4c 89 e7 mov %r12,%rdi
1b4e: 66 89 45 ae mov %ax,-0x52(%rbp)
goto nla_put_failure;
if (send_ip && vxlan_nla_put_addr(skb, NDA_DST, &rdst->remote_ip))
1b52: e8 00 00 00 00 callq 1b57 <vxlan_fdb_info+0x167>
1b57: 85 c0 test %eax,%eax
goto nla_put_failure;
if (rdst->remote_port && rdst->remote_port != vxlan->cfg.dst_port &&
1b59: 0f 85 ea 00 00 00 jne 1c49 <vxlan_fdb_info+0x259>
1b5f: 41 8b 46 20 mov 0x20(%r14),%eax
1b63: 41 3b 47 60 cmp 0x60(%r15),%eax
1b67: 74 23 je 1b8c <vxlan_fdb_info+0x19c>
1b69: 48 8d 4d b4 lea -0x4c(%rbp),%rcx
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_be16(struct sk_buff *skb, int attrtype, __be16 value)
{
return nla_put(skb, attrtype, sizeof(__be16), &value);
1b6d: 0f c8 bswap %eax
1b6f: ba 04 00 00 00 mov $0x4,%edx
1b74: be 07 00 00 00 mov $0x7,%esi
1b79: 4c 89 e7 mov %r12,%rdi
1b7c: 89 45 b4 mov %eax,-0x4c(%rbp)
1b7f: e8 00 00 00 00 callq 1b84 <vxlan_fdb_info+0x194>
1b84: 85 c0 test %eax,%eax
1b86: 0f 85 bd 00 00 00 jne 1c49 <vxlan_fdb_info+0x259>
1b8c: 41 8b 46 24 mov 0x24(%r14),%eax
nla_put_be16(skb, NDA_PORT, rdst->remote_port))
goto nla_put_failure;
if (rdst->remote_vni != vxlan->default_dst.remote_vni &&
1b90: 85 c0 test %eax,%eax
1b92: 0f 85 d7 00 00 00 jne 1c6f <vxlan_fdb_info+0x27f>
1b98: 4c 8b 75 a0 mov -0x60(%rbp),%r14
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value)
{
return nla_put(skb, attrtype, sizeof(u32), &value);
1b9c: 4c 89 f7 mov %r14,%rdi
1b9f: 49 2b 7d 28 sub 0x28(%r13),%rdi
1ba3: e8 00 00 00 00 callq 1ba8 <vxlan_fdb_info+0x1b8>
1ba8: 4c 89 f7 mov %r14,%rdi
1bab: 49 2b 7d 20 sub 0x20(%r13),%rdi
1baf: 89 45 c4 mov %eax,-0x3c(%rbp)
1bb2: c7 45 c0 00 00 00 00 movl $0x0,-0x40(%rbp)
1bb9: e8 00 00 00 00 callq 1bbe <vxlan_fdb_info+0x1ce>
nla_put_u32(skb, NDA_VNI, be32_to_cpu(rdst->remote_vni)))
goto nla_put_failure;
if (rdst->remote_ifindex &&
1bbe: 48 8d 4d c0 lea -0x40(%rbp),%rcx
1bc2: ba 10 00 00 00 mov $0x10,%edx
1bc7: be 03 00 00 00 mov $0x3,%esi
nla_put_u32(skb, NDA_IFINDEX, rdst->remote_ifindex))
goto nla_put_failure;
ci.ndm_used = jiffies_to_clock_t(now - fdb->used);
1bcc: 4c 89 e7 mov %r12,%rdi
1bcf: 89 45 c8 mov %eax,-0x38(%rbp)
1bd2: c7 45 cc 00 00 00 00 movl $0x0,-0x34(%rbp)
ci.ndm_confirmed = 0;
ci.ndm_updated = jiffies_to_clock_t(now - fdb->updated);
1bd9: e8 00 00 00 00 callq 1bde <vxlan_fdb_info+0x1ee>
1bde: 85 c0 test %eax,%eax
goto nla_put_failure;
if (rdst->remote_ifindex &&
nla_put_u32(skb, NDA_IFINDEX, rdst->remote_ifindex))
goto nla_put_failure;
ci.ndm_used = jiffies_to_clock_t(now - fdb->used);
1be0: 75 67 jne 1c49 <vxlan_fdb_info+0x259>
ci.ndm_confirmed = 0;
1be2: 41 8b 94 24 c8 00 00 mov 0xc8(%r12),%edx
1be9: 00
ci.ndm_updated = jiffies_to_clock_t(now - fdb->updated);
1bea: 49 03 94 24 d0 00 00 add 0xd0(%r12),%rdx
1bf1: 00
ci.ndm_refcnt = 0;
if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci))
1bf2: 48 29 da sub %rbx,%rdx
1bf5: 89 13 mov %edx,(%rbx)
1bf7: 48 8b 4d d0 mov -0x30(%rbp),%rcx
1bfb: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
1c02: 00 00
goto nla_put_failure;
ci.ndm_used = jiffies_to_clock_t(now - fdb->used);
ci.ndm_confirmed = 0;
ci.ndm_updated = jiffies_to_clock_t(now - fdb->updated);
ci.ndm_refcnt = 0;
1c04: 0f 85 23 01 00 00 jne 1d2d <vxlan_fdb_info+0x33d>
if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci))
1c0a: 48 83 c4 40 add $0x40,%rsp
1c0e: 5b pop %rbx
1c0f: 41 5c pop %r12
1c11: 41 5d pop %r13
* attributes. Only necessary if attributes have been added to
* the message.
*/
static inline void nlmsg_end(struct sk_buff *skb, struct nlmsghdr *nlh)
{
nlh->nlmsg_len = skb_tail_pointer(skb) - (unsigned char *)nlh;
1c13: 41 5e pop %r14
1c15: 41 5f pop %r15
1c17: 5d pop %rbp
1c18: c3 retq
1c19: 66 41 83 3e 0a cmpw $0xa,(%r14)
1c1e: 0f 84 d0 00 00 00 je 1cf4 <vxlan_fdb_info+0x304>
1c24: 41 8b 46 04 mov 0x4(%r14),%eax
return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
return -EMSGSIZE;
}
1c28: 48 8d 4d bc lea -0x44(%rbp),%rcx
1c2c: ba 04 00 00 00 mov $0x4,%edx
1c31: be 01 00 00 00 mov $0x1,%esi
1c36: 4c 89 e7 mov %r12,%rdi
1c39: 89 45 bc mov %eax,-0x44(%rbp)
1c3c: e8 00 00 00 00 callq 1c41 <vxlan_fdb_info+0x251>
1c41: 85 c0 test %eax,%eax
1c43: 0f 84 e0 fe ff ff je 1b29 <vxlan_fdb_info+0x139>
}
static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
const union vxlan_addr *ip)
{
if (ip->sa.sa_family == AF_INET6)
1c49: 49 8b 84 24 d8 00 00 mov 0xd8(%r12),%rax
1c50: 00
1c51: 48 39 d8 cmp %rbx,%rax
1c54: 0f 87 b5 00 00 00 ja 1d0f <vxlan_fdb_info+0x31f>
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_be32(struct sk_buff *skb, int attrtype, __be32 value)
{
return nla_put(skb, attrtype, sizeof(__be32), &value);
1c5a: 48 29 c3 sub %rax,%rbx
1c5d: 4c 89 e7 mov %r12,%rdi
1c60: 48 89 de mov %rbx,%rsi
1c63: e8 00 00 00 00 callq 1c68 <vxlan_fdb_info+0x278>
1c68: b8 a6 ff ff ff mov $0xffffffa6,%eax
1c6d: eb 88 jmp 1bf7 <vxlan_fdb_info+0x207>
1c6f: 48 8d 4d b8 lea -0x48(%rbp),%rcx
goto nla_put_failure;
if (send_eth && nla_put(skb, NDA_LLADDR, ETH_ALEN, &fdb->eth_addr))
goto nla_put_failure;
if (send_ip && vxlan_nla_put_addr(skb, NDA_DST, &rdst->remote_ip))
1c73: ba 04 00 00 00 mov $0x4,%edx
1c78: be 08 00 00 00 mov $0x8,%esi
* Trims the message to the provided mark.
*/
static inline void nlmsg_trim(struct sk_buff *skb, const void *mark)
{
if (mark) {
WARN_ON((unsigned char *) mark < skb->data);
1c7d: 4c 89 e7 mov %r12,%rdi
1c80: 89 45 b8 mov %eax,-0x48(%rbp)
1c83: e8 00 00 00 00 callq 1c88 <vxlan_fdb_info+0x298>
1c88: 85 c0 test %eax,%eax
skb_trim(skb, (unsigned char *) mark - skb->data);
1c8a: 0f 84 08 ff ff ff je 1b98 <vxlan_fdb_info+0x1a8>
1c90: eb b7 jmp 1c49 <vxlan_fdb_info+0x259>
1c92: 49 8d 4d 40 lea 0x40(%r13),%rcx
1c96: ba 06 00 00 00 mov $0x6,%edx
nlmsg_end(skb, nlh);
return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
return -EMSGSIZE;
1c9b: be 02 00 00 00 mov $0x2,%esi
* @attrtype: attribute type
* @value: numeric value
*/
static inline int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value)
{
return nla_put(skb, attrtype, sizeof(u32), &value);
1ca0: 4c 89 e7 mov %r12,%rdi
1ca3: 44 88 45 9c mov %r8b,-0x64(%rbp)
1ca7: e8 00 00 00 00 callq 1cac <vxlan_fdb_info+0x2bc>
1cac: 85 c0 test %eax,%eax
1cae: 44 0f b6 45 9c movzbl -0x64(%rbp),%r8d
1cb3: 0f 84 67 fe ff ff je 1b20 <vxlan_fdb_info+0x130>
nla_put_be16(skb, NDA_PORT, rdst->remote_port))
goto nla_put_failure;
if (rdst->remote_vni != vxlan->default_dst.remote_vni &&
nla_put_u32(skb, NDA_VNI, be32_to_cpu(rdst->remote_vni)))
goto nla_put_failure;
if (rdst->remote_ifindex &&
1cb9: eb 8e jmp 1c49 <vxlan_fdb_info+0x259>
1cbb: c6 40 10 02 movb $0x2,0x10(%rax)
1cbf: 66 41 83 3e 0a cmpw $0xa,(%r14)
if (!net_eq(dev_net(vxlan->dev), vxlan->net) &&
nla_put_s32(skb, NDA_LINK_NETNSID,
peernet2id_alloc(dev_net(vxlan->dev), vxlan->net)))
goto nla_put_failure;
if (send_eth && nla_put(skb, NDA_LLADDR, ETH_ALEN, &fdb->eth_addr))
1cc4: 74 20 je 1ce6 <vxlan_fdb_info+0x2f6>
1cc6: 41 8b 46 04 mov 0x4(%r14),%eax
1cca: 85 c0 test %eax,%eax
1ccc: 41 0f 94 c0 sete %r8b
1cd0: 41 0f b7 45 44 movzwl 0x44(%r13),%eax
1cd5: 41 83 f0 01 xor $0x1,%r8d
1cd9: 41 0b 45 40 or 0x40(%r13),%eax
1cdd: 41 0f 95 c1 setne %r9b
1ce1: e9 c3 fd ff ff jmpq 1aa9 <vxlan_fdb_info+0xb9>
1ce6: 49 8b 46 08 mov 0x8(%r14),%rax
1cea: 49 0b 46 10 or 0x10(%r14),%rax
memset(ndm, 0, sizeof(*ndm));
send_eth = send_ip = true;
if (type == RTM_GETNEIGH) {
ndm->ndm_family = AF_INET;
1cee: 41 0f 94 c0 sete %r8b
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
1cf2: eb dc jmp 1cd0 <vxlan_fdb_info+0x2e0>
1cf4: 49 8d 4e 08 lea 0x8(%r14),%rcx
return ipv6_addr_any(&ipa->sin6.sin6_addr);
else
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
1cf8: ba 10 00 00 00 mov $0x10,%edx
1cfd: be 01 00 00 00 mov $0x1,%esi
send_eth = send_ip = true;
if (type == RTM_GETNEIGH) {
ndm->ndm_family = AF_INET;
send_ip = !vxlan_addr_any(&rdst->remote_ip);
send_eth = !is_zero_ether_addr(fdb->eth_addr);
1d02: 4c 89 e7 mov %r12,%rdi
send_eth = send_ip = true;
if (type == RTM_GETNEIGH) {
ndm->ndm_family = AF_INET;
send_ip = !vxlan_addr_any(&rdst->remote_ip);
1d05: e8 00 00 00 00 callq 1d0a <vxlan_fdb_info+0x31a>
send_eth = !is_zero_ether_addr(fdb->eth_addr);
1d0a: e9 32 ff ff ff jmpq 1c41 <vxlan_fdb_info+0x251>
1d0f: be 16 02 00 00 mov $0x216,%esi
1d14: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
static inline bool ipv6_addr_any(const struct in6_addr *a)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
const unsigned long *ul = (const unsigned long *)a;
return (ul[0] | ul[1]) == 0UL;
1d1b: e8 00 00 00 00 callq 1d20 <vxlan_fdb_info+0x330>
1d20: 49 8b 84 24 d8 00 00 mov 0xd8(%r12),%rax
1d27: 00
* @addr: IPv6 address
*/
static inline int nla_put_in6_addr(struct sk_buff *skb, int attrtype,
const struct in6_addr *addr)
{
return nla_put(skb, attrtype, sizeof(*addr), addr);
1d28: e9 2d ff ff ff jmpq 1c5a <vxlan_fdb_info+0x26a>
1d2d: e8 00 00 00 00 callq 1d32 <vxlan_fdb_info+0x342>
1d32: 0f 1f 40 00 nopl 0x0(%rax)
1d36: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
1d3d: 00 00 00
0000000000001d40 <vxlan_fdb_notify>:
* Trims the message to the provided mark.
*/
static inline void nlmsg_trim(struct sk_buff *skb, const void *mark)
{
if (mark) {
WARN_ON((unsigned char *) mark < skb->data);
1d40: e8 00 00 00 00 callq 1d45 <vxlan_fdb_notify+0x5>
1d45: 55 push %rbp
1d46: 48 89 e5 mov %rsp,%rbp
1d49: 41 57 push %r15
1d4b: 41 56 push %r14
1d4d: 41 55 push %r13
1d4f: 41 54 push %r12
1d51: 49 89 f6 mov %rsi,%r14
1d54: 53 push %rbx
1d55: 49 89 fc mov %rdi,%r12
1d58: 49 89 d7 mov %rdx,%r15
1d5b: be 20 00 08 02 mov $0x2080020,%esi
return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
return -EMSGSIZE;
}
1d60: 31 d2 xor %edx,%edx
1d62: 48 83 ec 18 sub $0x18,%rsp
1d66: 48 8b 47 30 mov 0x30(%rdi),%rax
1d6a: 89 4d d4 mov %ecx,-0x2c(%rbp)
1d6d: bf 70 00 00 00 mov $0x70,%edi
+ nla_total_size(sizeof(struct nda_cacheinfo));
}
static void vxlan_fdb_notify(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
struct vxlan_rdst *rd, int type)
{
1d72: b9 ff ff ff ff mov $0xffffffff,%ecx
1d77: 4c 8b a8 80 04 00 00 mov 0x480(%rax),%r13
1d7e: e8 00 00 00 00 callq 1d83 <vxlan_fdb_notify+0x43>
1d83: 48 85 c0 test %rax,%rax
1d86: 0f 84 83 00 00 00 je 1e0f <vxlan_fdb_notify+0xcf>
struct sk_buff *__build_skb(void *data, unsigned int frag_size);
struct sk_buff *build_skb(void *data, unsigned int frag_size);
static inline struct sk_buff *alloc_skb(unsigned int size,
gfp_t priority)
{
return __alloc_skb(size, priority, 0, NUMA_NO_NODE);
1d8c: 44 8b 4d d4 mov -0x2c(%rbp),%r9d
1d90: 45 31 c0 xor %r8d,%r8d
1d93: 31 c9 xor %ecx,%ecx
1d95: 4c 89 e6 mov %r12,%rsi
1d98: 4c 89 7c 24 08 mov %r15,0x8(%rsp)
1d9d: c7 04 24 00 00 00 00 movl $0x0,(%rsp)
1da4: 4c 89 f2 mov %r14,%rdx
1da7: 48 89 c7 mov %rax,%rdi
1daa: 48 89 c3 mov %rax,%rbx
1dad: e8 3e fc ff ff callq 19f0 <vxlan_fdb_info>
1db2: 85 c0 test %eax,%eax
struct net *net = dev_net(vxlan->dev);
struct sk_buff *skb;
int err = -ENOBUFS;
skb = nlmsg_new(vxlan_nlmsg_size(), GFP_ATOMIC);
if (skb == NULL)
1db4: 41 89 c4 mov %eax,%r12d
1db7: 79 2c jns 1de5 <vxlan_fdb_notify+0xa5>
1db9: 83 f8 a6 cmp $0xffffffa6,%eax
goto errout;
err = vxlan_fdb_info(skb, vxlan, fdb, 0, 0, type, 0, rd);
1dbc: 74 59 je 1e17 <vxlan_fdb_notify+0xd7>
1dbe: 48 89 df mov %rbx,%rdi
1dc1: e8 00 00 00 00 callq 1dc6 <vxlan_fdb_notify+0x86>
1dc6: 44 89 e2 mov %r12d,%edx
1dc9: 4c 89 ef mov %r13,%rdi
1dcc: be 03 00 00 00 mov $0x3,%esi
1dd1: e8 00 00 00 00 callq 1dd6 <vxlan_fdb_notify+0x96>
1dd6: 48 83 c4 18 add $0x18,%rsp
1dda: 5b pop %rbx
1ddb: 41 5c pop %r12
1ddd: 41 5d pop %r13
1ddf: 41 5e pop %r14
1de1: 41 5f pop %r15
if (err < 0) {
1de3: 5d pop %rbp
skb = nlmsg_new(vxlan_nlmsg_size(), GFP_ATOMIC);
if (skb == NULL)
goto errout;
err = vxlan_fdb_info(skb, vxlan, fdb, 0, 0, type, 0, rd);
1de4: c3 retq
1de5: 4c 89 ee mov %r13,%rsi
if (err < 0) {
1de8: 48 89 df mov %rbx,%rdi
/* -EMSGSIZE implies BUG in vxlan_nlmsg_size() */
WARN_ON(err == -EMSGSIZE);
1deb: 41 b9 20 00 08 02 mov $0x2080020,%r9d
kfree_skb(skb);
1df1: 45 31 c0 xor %r8d,%r8d
1df4: b9 03 00 00 00 mov $0x3,%ecx
rtnl_notify(skb, net, 0, RTNLGRP_NEIGH, NULL, GFP_ATOMIC);
return;
errout:
if (err < 0)
rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
1df9: 31 d2 xor %edx,%edx
1dfb: e8 00 00 00 00 callq 1e00 <vxlan_fdb_notify+0xc0>
1e00: 48 83 c4 18 add $0x18,%rsp
1e04: 5b pop %rbx
1e05: 41 5c pop %r12
}
1e07: 41 5d pop %r13
1e09: 41 5e pop %r14
1e0b: 41 5f pop %r15
1e0d: 5d pop %rbp
1e0e: c3 retq
1e0f: 41 bc 97 ff ff ff mov $0xffffff97,%r12d
WARN_ON(err == -EMSGSIZE);
kfree_skb(skb);
goto errout;
}
rtnl_notify(skb, net, 0, RTNLGRP_NEIGH, NULL, GFP_ATOMIC);
1e15: eb af jmp 1dc6 <vxlan_fdb_notify+0x86>
1e17: be 60 01 00 00 mov $0x160,%esi
1e1c: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
1e23: e8 00 00 00 00 callq 1e28 <vxlan_fdb_notify+0xe8>
1e28: eb 94 jmp 1dbe <vxlan_fdb_notify+0x7e>
1e2a: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1)
0000000000001e30 <vxlan_fdb_destroy>:
return;
errout:
if (err < 0)
rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
}
1e30: e8 00 00 00 00 callq 1e35 <vxlan_fdb_destroy+0x5>
1e35: 55 push %rbp
1e36: 48 89 e5 mov %rsp,%rbp
1e39: 41 54 push %r12
1e3b: 53 push %rbx
1e3c: 49 89 fc mov %rdi,%r12
static void vxlan_fdb_notify(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
struct vxlan_rdst *rd, int type)
{
struct net *net = dev_net(vxlan->dev);
struct sk_buff *skb;
int err = -ENOBUFS;
1e3f: 48 89 f3 mov %rsi,%rbx
1e42: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
goto errout;
err = vxlan_fdb_info(skb, vxlan, fdb, 0, 0, type, 0, rd);
if (err < 0) {
/* -EMSGSIZE implies BUG in vxlan_nlmsg_size() */
WARN_ON(err == -EMSGSIZE);
1e47: 41 83 ac 24 ec 00 00 subl $0x1,0xec(%r12)
1e4e: 00 01
1e50: b9 1d 00 00 00 mov $0x1d,%ecx
1e55: 48 89 de mov %rbx,%rsi
1e58: 48 8b 43 30 mov 0x30(%rbx),%rax
1e5c: 4c 89 e7 mov %r12,%rdi
1e5f: 48 8d 50 d8 lea -0x28(%rax),%rdx
}
kfree(f);
}
static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f)
{
1e63: e8 d8 fe ff ff callq 1d40 <vxlan_fdb_notify>
1e68: 48 8b 03 mov (%rbx),%rax
1e6b: 48 8b 53 08 mov 0x8(%rbx),%rdx
1e6f: 48 85 c0 test %rax,%rax
1e72: 48 89 02 mov %rax,(%rdx)
1e75: 74 04 je 1e7b <vxlan_fdb_destroy+0x4b>
netdev_dbg(vxlan->dev,
"delete %pM\n", f->eth_addr);
--vxlan->addrcnt;
1e77: 48 89 50 08 mov %rdx,0x8(%rax)
1e7b: 48 b8 00 02 00 00 00 movabs $0xdead000000000200,%rax
1e82: 00 ad de
vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_DELNEIGH);
1e85: 48 8d 7b 10 lea 0x10(%rbx),%rdi
1e89: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
1e90: 48 89 43 08 mov %rax,0x8(%rbx)
1e94: e8 00 00 00 00 callq 1e99 <vxlan_fdb_destroy+0x69>
return !READ_ONCE(h->first);
}
static inline void __hlist_del(struct hlist_node *n)
{
struct hlist_node *next = n->next;
1e99: 5b pop %rbx
1e9a: 41 5c pop %r12
struct hlist_node **pprev = n->pprev;
1e9c: 5d pop %rbp
1e9d: c3 retq
1e9e: 48 8d 4e 40 lea 0x40(%rsi),%rcx
1ea2: 48 8b 77 30 mov 0x30(%rdi),%rsi
WRITE_ONCE(*pprev, next);
if (next)
1ea6: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
* hlist_for_each_entry().
*/
static inline void hlist_del_rcu(struct hlist_node *n)
{
__hlist_del(n);
n->pprev = LIST_POISON2;
1ead: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
1eb4: e8 00 00 00 00 callq 1eb9 <vxlan_fdb_destroy+0x89>
hlist_del_rcu(&f->hlist);
call_rcu(&f->rcu, vxlan_fdb_free);
1eb9: eb 8c jmp 1e47 <vxlan_fdb_destroy+0x17>
1ebb: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
0000000000001ec0 <vxlan_cleanup>:
1ec0: e8 00 00 00 00 callq 1ec5 <vxlan_cleanup+0x5>
1ec5: 55 push %rbp
1ec6: 48 89 e5 mov %rsp,%rbp
}
1ec9: 41 57 push %r15
1ecb: 41 56 push %r14
1ecd: 41 55 push %r13
kfree(f);
}
static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f)
{
netdev_dbg(vxlan->dev,
1ecf: 41 54 push %r12
1ed1: 53 push %rbx
1ed2: 48 83 ec 10 sub $0x10,%rsp
1ed6: 48 8b 47 30 mov 0x30(%rdi),%rax
1eda: 4c 8b 25 00 00 00 00 mov 0x0(%rip),%r12 # 1ee1 <vxlan_cleanup+0x21>
1ee1: 48 8b 40 48 mov 0x48(%rax),%rax
1ee5: a8 01 test $0x1,%al
1ee7: 0f 84 b3 00 00 00 je 1fa0 <vxlan_cleanup+0xe0>
1eed: 48 8d 87 e8 00 00 00 lea 0xe8(%rdi),%rax
return NETDEV_TX_OK;
}
/* Walk the forwarding table and purge stale entries */
static void vxlan_cleanup(unsigned long arg)
{
1ef4: 4c 8d af 60 01 00 00 lea 0x160(%rdi),%r13
1efb: 48 89 fb mov %rdi,%rbx
1efe: 49 81 c4 c4 09 00 00 add $0x9c4,%r12
1f05: 48 89 45 d0 mov %rax,-0x30(%rbp)
struct vxlan_dev *vxlan = (struct vxlan_dev *) arg;
unsigned long next_timer = jiffies + FDB_AGE_INTERVAL;
unsigned int h;
if (!netif_running(vxlan->dev))
1f09: 48 8d 87 60 09 00 00 lea 0x960(%rdi),%rax
/* Walk the forwarding table and purge stale entries */
static void vxlan_cleanup(unsigned long arg)
{
struct vxlan_dev *vxlan = (struct vxlan_dev *) arg;
unsigned long next_timer = jiffies + FDB_AGE_INTERVAL;
1f10: 48 89 45 c8 mov %rax,-0x38(%rbp)
1f14: 48 8b 7d d0 mov -0x30(%rbp),%rdi
unsigned int h;
if (!netif_running(vxlan->dev))
1f18: e8 00 00 00 00 callq 1f1d <vxlan_cleanup+0x5d>
1f1d: 4d 8b 7d 00 mov 0x0(%r13),%r15
1f21: 4d 85 ff test %r15,%r15
1f24: 75 24 jne 1f4a <vxlan_cleanup+0x8a>
1f26: eb 56 jmp 1f7e <vxlan_cleanup+0xbe>
1f28: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
1f2d: b8 04 00 00 00 mov $0x4,%eax
/* Walk the forwarding table and purge stale entries */
static void vxlan_cleanup(unsigned long arg)
{
struct vxlan_dev *vxlan = (struct vxlan_dev *) arg;
unsigned long next_timer = jiffies + FDB_AGE_INTERVAL;
1f32: 4c 89 fe mov %r15,%rsi
1f35: 48 89 df mov %rbx,%rdi
1f38: 66 41 89 47 46 mov %ax,0x46(%r15)
1f3d: e8 ee fe ff ff callq 1e30 <vxlan_fdb_destroy>
1f42: 4d 85 f6 test %r14,%r14
}
static __always_inline void spin_lock_bh(spinlock_t *lock)
{
raw_spin_lock_bh(&lock->rlock);
1f45: 4d 89 f7 mov %r14,%r15
1f48: 74 34 je 1f7e <vxlan_cleanup+0xbe>
1f4a: 41 f6 47 46 80 testb $0x80,0x46(%r15)
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct hlist_node *p, *n;
spin_lock_bh(&vxlan->hash_lock);
hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
1f4f: 4d 8b 37 mov (%r15),%r14
1f52: 75 ee jne 1f42 <vxlan_cleanup+0x82>
1f54: 48 69 93 50 01 00 00 imul $0xfa,0x150(%rbx),%rdx
1f5b: fa 00 00 00
timeout = f->used + vxlan->cfg.age_interval * HZ;
if (time_before_eq(timeout, jiffies)) {
netdev_dbg(vxlan->dev,
"garbage collect %pM\n",
f->eth_addr);
f->state = NUD_STALE;
1f5f: 48 8b 0d 00 00 00 00 mov 0x0(%rip),%rcx # 1f66 <vxlan_cleanup+0xa6>
vxlan_fdb_destroy(vxlan, f);
1f66: 49 03 57 28 add 0x28(%r15),%rdx
timeout = f->used + vxlan->cfg.age_interval * HZ;
if (time_before_eq(timeout, jiffies)) {
netdev_dbg(vxlan->dev,
"garbage collect %pM\n",
f->eth_addr);
f->state = NUD_STALE;
1f6a: 48 39 d1 cmp %rdx,%rcx
vxlan_fdb_destroy(vxlan, f);
1f6d: 79 b9 jns 1f28 <vxlan_cleanup+0x68>
1f6f: 4c 39 e2 cmp %r12,%rdx
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct hlist_node *p, *n;
spin_lock_bh(&vxlan->hash_lock);
hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
1f72: 4d 89 f7 mov %r14,%r15
1f75: 4c 0f 48 e2 cmovs %rdx,%r12
1f79: 4d 85 f6 test %r14,%r14
struct vxlan_fdb *f
= container_of(p, struct vxlan_fdb, hlist);
unsigned long timeout;
if (f->state & NUD_PERMANENT)
1f7c: 75 cc jne 1f4a <vxlan_cleanup+0x8a>
1f7e: 48 8b 7d d0 mov -0x30(%rbp),%rdi
1f82: 49 83 c5 08 add $0x8,%r13
continue;
timeout = f->used + vxlan->cfg.age_interval * HZ;
1f86: e8 00 00 00 00 callq 1f8b <vxlan_cleanup+0xcb>
1f8b: 4c 3b 6d c8 cmp -0x38(%rbp),%r13
if (time_before_eq(timeout, jiffies)) {
1f8f: 75 83 jne 1f14 <vxlan_cleanup+0x54>
1f91: 48 8d bb a0 00 00 00 lea 0xa0(%rbx),%rdi
unsigned long timeout;
if (f->state & NUD_PERMANENT)
continue;
timeout = f->used + vxlan->cfg.age_interval * HZ;
1f98: 4c 89 e6 mov %r12,%rsi
if (time_before_eq(timeout, jiffies)) {
1f9b: e8 00 00 00 00 callq 1fa0 <vxlan_cleanup+0xe0>
"garbage collect %pM\n",
f->eth_addr);
f->state = NUD_STALE;
vxlan_fdb_destroy(vxlan, f);
} else if (time_before(timeout, next_timer))
next_timer = timeout;
1fa0: 48 83 c4 10 add $0x10,%rsp
1fa4: 5b pop %rbx
1fa5: 41 5c pop %r12
1fa7: 41 5d pop %r13
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct hlist_node *p, *n;
spin_lock_bh(&vxlan->hash_lock);
hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
1fa9: 41 5e pop %r14
1fab: 41 5f pop %r15
1fad: 5d pop %rbp
raw_spin_unlock(&lock->rlock);
}
static __always_inline void spin_unlock_bh(spinlock_t *lock)
{
raw_spin_unlock_bh(&lock->rlock);
1fae: c3 retq
1faf: 48 8b 73 30 mov 0x30(%rbx),%rsi
1fb3: 49 8d 4f 40 lea 0x40(%r15),%rcx
1fb7: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
unsigned int h;
if (!netif_running(vxlan->dev))
return;
for (h = 0; h < FDB_HASH_SIZE; ++h) {
1fbe: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
next_timer = timeout;
}
spin_unlock_bh(&vxlan->hash_lock);
}
mod_timer(&vxlan->age_timer, next_timer);
1fc5: e8 00 00 00 00 callq 1fca <vxlan_cleanup+0x10a>
1fca: e9 5e ff ff ff jmpq 1f2d <vxlan_cleanup+0x6d>
1fcf: 90 nop
0000000000001fd0 <vxlan_fdb_delete_default>:
}
1fd0: e8 00 00 00 00 callq 1fd5 <vxlan_fdb_delete_default+0x5>
1fd5: 55 push %rbp
1fd6: 48 89 e5 mov %rsp,%rbp
1fd9: 41 54 push %r12
1fdb: 4c 8d a7 e8 00 00 00 lea 0xe8(%rdi),%r12
if (f->state & NUD_PERMANENT)
continue;
timeout = f->used + vxlan->cfg.age_interval * HZ;
if (time_before_eq(timeout, jiffies)) {
netdev_dbg(vxlan->dev,
1fe2: 53 push %rbx
1fe3: 48 89 fb mov %rdi,%rbx
1fe6: 4c 89 e7 mov %r12,%rdi
1fe9: e8 00 00 00 00 callq 1fee <vxlan_fdb_delete_default+0x1e>
1fee: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
1ff5: 48 89 df mov %rbx,%rdi
1ff8: e8 03 e0 ff ff callq 0 <__vxlan_find_mac>
1ffd: 48 85 c0 test %rax,%rax
return 0;
}
static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan)
{
2000: 74 0b je 200d <vxlan_fdb_delete_default+0x3d>
2002: 48 89 c6 mov %rax,%rsi
2005: 48 89 df mov %rbx,%rdi
2008: e8 23 fe ff ff callq 1e30 <vxlan_fdb_destroy>
raw_spin_lock(&lock->rlock);
}
static __always_inline void spin_lock_bh(spinlock_t *lock)
{
raw_spin_lock_bh(&lock->rlock);
200d: 4c 89 e7 mov %r12,%rdi
2010: e8 00 00 00 00 callq 2015 <vxlan_fdb_delete_default+0x45>
2015: 5b pop %rbx
2016: 41 5c pop %r12
2018: 5d pop %rbp
2019: c3 retq
201a: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1)
0000000000002020 <vxlan_uninit>:
struct vxlan_fdb *f;
spin_lock_bh(&vxlan->hash_lock);
f = __vxlan_find_mac(vxlan, all_zeros_mac);
2020: e8 00 00 00 00 callq 2025 <vxlan_uninit+0x5>
2025: 55 push %rbp
2026: 48 89 e5 mov %rsp,%rbp
2029: 53 push %rbx
202a: 48 89 fb mov %rdi,%rbx
if (f)
202d: 48 8d bf 40 08 00 00 lea 0x840(%rdi),%rdi
vxlan_fdb_destroy(vxlan, f);
2034: e8 97 ff ff ff callq 1fd0 <vxlan_fdb_delete_default>
2039: 48 8b bb 88 04 00 00 mov 0x488(%rbx),%rdi
raw_spin_unlock(&lock->rlock);
}
static __always_inline void spin_unlock_bh(spinlock_t *lock)
{
raw_spin_unlock_bh(&lock->rlock);
2040: e8 00 00 00 00 callq 2045 <vxlan_uninit+0x25>
spin_unlock_bh(&vxlan->hash_lock);
}
2045: 5b pop %rbx
2046: 5d pop %rbp
2047: c3 retq
2048: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
204f: 00
0000000000002050 <vxlan_ip_miss>:
static void vxlan_uninit(struct net_device *dev)
{
2050: e8 00 00 00 00 callq 2055 <vxlan_ip_miss+0x5>
2055: 55 push %rbp
2056: 48 89 fa mov %rdi,%rdx
2059: b9 0a 00 00 00 mov $0xa,%ecx
struct vxlan_dev *vxlan = netdev_priv(dev);
vxlan_fdb_delete_default(vxlan);
205e: 48 89 e5 mov %rsp,%rbp
2061: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
2065: 48 81 ec b0 00 00 00 sub $0xb0,%rsp
free_percpu(dev->tstats);
206c: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
2073: 00 00
}
2075: 48 89 84 24 a8 00 00 mov %rax,0xa8(%rsp)
207c: 00
207d: 31 c0 xor %eax,%eax
207f: 48 89 e7 mov %rsp,%rdi
if (err < 0)
rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
}
static void vxlan_ip_miss(struct net_device *dev, union vxlan_addr *ipa)
{
2082: f3 48 ab rep stos %rax,%es:(%rdi)
2085: b9 04 00 00 00 mov $0x4,%ecx
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_fdb f = {
208a: 48 8d 7c 24 50 lea 0x50(%rsp),%rdi
if (err < 0)
rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
}
static void vxlan_ip_miss(struct net_device *dev, union vxlan_addr *ipa)
{
208f: 66 89 4c 24 46 mov %cx,0x46(%rsp)
2094: b9 0b 00 00 00 mov $0xb,%ecx
2099: f3 48 ab rep stos %rax,%es:(%rdi)
209c: 48 8b 06 mov (%rsi),%rax
209f: 48 8d ba 40 08 00 00 lea 0x840(%rdx),%rdi
20a6: 48 8d 54 24 50 lea 0x50(%rsp),%rdx
20ab: b9 1e 00 00 00 mov $0x1e,%ecx
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_fdb f = {
20b0: c7 44 24 70 01 00 00 movl $0x1,0x70(%rsp)
20b7: 00
20b8: 48 89 44 24 50 mov %rax,0x50(%rsp)
.state = NUD_STALE,
};
struct vxlan_rdst remote = {
20bd: 48 8b 46 08 mov 0x8(%rsi),%rax
}
static void vxlan_ip_miss(struct net_device *dev, union vxlan_addr *ipa)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_fdb f = {
20c1: 48 89 44 24 58 mov %rax,0x58(%rsp)
.state = NUD_STALE,
};
struct vxlan_rdst remote = {
20c6: 48 8b 46 10 mov 0x10(%rsi),%rax
20ca: 48 89 44 24 60 mov %rax,0x60(%rsp)
.remote_ip = *ipa, /* goes to NDA_DST */
.remote_vni = cpu_to_be32(VXLAN_N_VID),
};
vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH);
20cf: 8b 46 18 mov 0x18(%rsi),%eax
20d2: 48 89 e6 mov %rsp,%rsi
20d5: 89 44 24 68 mov %eax,0x68(%rsp)
20d9: e8 62 fc ff ff callq 1d40 <vxlan_fdb_notify>
20de: 48 8b 84 24 a8 00 00 mov 0xa8(%rsp),%rax
20e5: 00
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_fdb f = {
.state = NUD_STALE,
};
struct vxlan_rdst remote = {
20e6: 65 48 33 04 25 28 00 xor %gs:0x28,%rax
20ed: 00 00
20ef: 75 02 jne 20f3 <vxlan_ip_miss+0xa3>
20f1: c9 leaveq
20f2: c3 retq
20f3: e8 00 00 00 00 callq 20f8 <vxlan_ip_miss+0xa8>
20f8: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
20ff: 00
0000000000002100 <vxlan_fdb_miss>:
2100: e8 00 00 00 00 callq 2105 <vxlan_fdb_miss+0x5>
2105: 55 push %rbp
2106: 49 89 f8 mov %rdi,%r8
.remote_ip = *ipa, /* goes to NDA_DST */
.remote_vni = cpu_to_be32(VXLAN_N_VID),
};
vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH);
2109: b9 0a 00 00 00 mov $0xa,%ecx
}
210e: ba 04 00 00 00 mov $0x4,%edx
2113: 48 89 e5 mov %rsp,%rbp
2116: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
211a: 48 81 ec b0 00 00 00 sub $0xb0,%rsp
2121: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
2128: 00 00
212a: 48 89 84 24 a8 00 00 mov %rax,0xa8(%rsp)
2131: 00
static void vxlan_fdb_miss(struct vxlan_dev *vxlan, const u8 eth_addr[ETH_ALEN])
{
2132: 31 c0 xor %eax,%eax
2134: 48 89 e7 mov %rsp,%rdi
2137: f3 48 ab rep stos %rax,%es:(%rdi)
struct vxlan_fdb f = {
213a: 48 8d 7c 24 50 lea 0x50(%rsp),%rdi
213f: b9 0b 00 00 00 mov $0xb,%ecx
vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH);
}
static void vxlan_fdb_miss(struct vxlan_dev *vxlan, const u8 eth_addr[ETH_ALEN])
{
2144: 66 89 54 24 46 mov %dx,0x46(%rsp)
2149: 48 8d 54 24 50 lea 0x50(%rsp),%rdx
214e: f3 48 ab rep stos %rax,%es:(%rdi)
2151: 8b 06 mov (%rsi),%eax
2153: b9 1e 00 00 00 mov $0x1e,%ecx
2158: 4c 89 c7 mov %r8,%rdi
215b: 89 44 24 40 mov %eax,0x40(%rsp)
215f: 0f b7 46 04 movzwl 0x4(%rsi),%eax
2163: 48 89 e6 mov %rsp,%rsi
struct vxlan_fdb f = {
2166: 66 89 44 24 44 mov %ax,0x44(%rsp)
.state = NUD_STALE,
};
struct vxlan_rdst remote = { };
216b: e8 d0 fb ff ff callq 1d40 <vxlan_fdb_notify>
2170: 48 8b 84 24 a8 00 00 mov 0xa8(%rsp),%rax
2177: 00
vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH);
}
static void vxlan_fdb_miss(struct vxlan_dev *vxlan, const u8 eth_addr[ETH_ALEN])
{
struct vxlan_fdb f = {
2178: 65 48 33 04 25 28 00 xor %gs:0x28,%rax
217f: 00 00
.state = NUD_STALE,
};
struct vxlan_rdst remote = { };
memcpy(f.eth_addr, eth_addr, ETH_ALEN);
2181: 75 02 jne 2185 <vxlan_fdb_miss+0x85>
vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH);
2183: c9 leaveq
2184: c3 retq
2185: e8 00 00 00 00 callq 218a <vxlan_fdb_miss+0x8a>
218a: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1)
0000000000002190 <vxlan_fdb_dump>:
struct vxlan_fdb f = {
.state = NUD_STALE,
};
struct vxlan_rdst remote = { };
memcpy(f.eth_addr, eth_addr, ETH_ALEN);
2190: e8 00 00 00 00 callq 2195 <vxlan_fdb_dump+0x5>
vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH);
2195: 55 push %rbp
struct vxlan_fdb f = {
.state = NUD_STALE,
};
struct vxlan_rdst remote = { };
memcpy(f.eth_addr, eth_addr, ETH_ALEN);
2196: 48 8d 82 40 08 00 00 lea 0x840(%rdx),%rax
vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH);
219d: 48 89 e5 mov %rsp,%rbp
}
21a0: 41 57 push %r15
21a2: 41 56 push %r14
21a4: 41 55 push %r13
21a6: 41 54 push %r12
21a8: 49 89 f6 mov %rsi,%r14
21ab: 53 push %rbx
21ac: 44 89 c3 mov %r8d,%ebx
21af: 48 83 ec 30 sub $0x30,%rsp
21b3: 48 89 45 d0 mov %rax,-0x30(%rbp)
21b7: 48 8d 82 a0 09 00 00 lea 0x9a0(%rdx),%rax
21be: 48 89 7d c8 mov %rdi,-0x38(%rbp)
/* Dump forwarding table */
static int vxlan_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
struct net_device *dev,
struct net_device *filter_dev, int idx)
{
21c2: 48 89 45 c0 mov %rax,-0x40(%rbp)
21c6: 48 8d 82 a0 11 00 00 lea 0x11a0(%rdx),%rax
21cd: 48 89 45 b8 mov %rax,-0x48(%rbp)
21d1: 48 8b 45 c0 mov -0x40(%rbp),%rax
21d5: 4c 8b 28 mov (%rax),%r13
21d8: 4d 85 ed test %r13,%r13
21db: 0f 84 91 00 00 00 je 2272 <vxlan_fdb_dump+0xe2>
21e1: 49 8b 45 30 mov 0x30(%r13),%rax
21e5: 4d 8d 7d 30 lea 0x30(%r13),%r15
21e9: 49 39 c7 cmp %rax,%r15
21ec: 4c 8d 60 d8 lea -0x28(%rax),%r12
21f0: 74 73 je 2265 <vxlan_fdb_dump+0xd5>
21f2: 4c 89 f0 mov %r14,%rax
21f5: 4d 89 fe mov %r15,%r14
21f8: 49 89 c7 mov %rax,%r15
21fb: eb 11 jmp 220e <vxlan_fdb_dump+0x7e>
21fd: 49 8b 44 24 28 mov 0x28(%r12),%rax
})
static __always_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
__READ_ONCE_SIZE;
2202: 83 c3 01 add $0x1,%ebx
2205: 49 39 c6 cmp %rax,%r14
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct vxlan_fdb *f;
int err;
hlist_for_each_entry_rcu(f, &vxlan->fdb_head[h], hlist) {
2208: 4c 8d 60 d8 lea -0x28(%rax),%r12
220c: 74 54 je 2262 <vxlan_fdb_dump+0xd2>
220e: 48 63 c3 movslq %ebx,%rax
2211: 49 3b 47 48 cmp 0x48(%r15),%rax
struct vxlan_rdst *rd;
list_for_each_entry_rcu(rd, &f->remotes, list) {
2215: 7c e6 jl 21fd <vxlan_fdb_dump+0x6d>
2217: 49 8b 47 08 mov 0x8(%r15),%rax
221b: 48 8b 75 d0 mov -0x30(%rbp),%rsi
221f: 41 b9 1c 00 00 00 mov $0x1c,%r9d
2225: 48 8b 7d c8 mov -0x38(%rbp),%rdi
2229: 4c 89 ea mov %r13,%rdx
222c: 44 8b 40 08 mov 0x8(%rax),%r8d
2230: 49 8b 07 mov (%r15),%rax
if (err < 0) {
cb->args[1] = err;
goto out;
}
skip:
++idx;
2233: 8b 48 34 mov 0x34(%rax),%ecx
int err;
hlist_for_each_entry_rcu(f, &vxlan->fdb_head[h], hlist) {
struct vxlan_rdst *rd;
list_for_each_entry_rcu(rd, &f->remotes, list) {
2236: 4c 89 64 24 08 mov %r12,0x8(%rsp)
223b: c7 04 24 02 00 00 00 movl $0x2,(%rsp)
if (idx < cb->args[0])
2242: e8 a9 f7 ff ff callq 19f0 <vxlan_fdb_info>
goto skip;
err = vxlan_fdb_info(skb, vxlan, f,
2247: 85 c0 test %eax,%eax
2249: 79 b2 jns 21fd <vxlan_fdb_dump+0x6d>
224b: 48 98 cltq
224d: 49 89 47 50 mov %rax,0x50(%r15)
2251: 48 83 c4 30 add $0x30,%rsp
2255: 89 d8 mov %ebx,%eax
2257: 5b pop %rbx
2258: 41 5c pop %r12
225a: 41 5d pop %r13
225c: 41 5e pop %r14
225e: 41 5f pop %r15
2260: 5d pop %rbp
2261: c3 retq
2262: 4d 89 fe mov %r15,%r14
2265: 4d 8b 6d 00 mov 0x0(%r13),%r13
2269: 4d 85 ed test %r13,%r13
226c: 0f 85 6f ff ff ff jne 21e1 <vxlan_fdb_dump+0x51>
2272: 48 83 45 c0 08 addq $0x8,-0x40(%rbp)
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
RTM_NEWNEIGH,
NLM_F_MULTI, rd);
if (err < 0) {
2277: 48 8b 45 c0 mov -0x40(%rbp),%rax
cb->args[1] = err;
227b: 48 39 45 b8 cmp %rax,-0x48(%rbp)
227f: 0f 85 4c ff ff ff jne 21d1 <vxlan_fdb_dump+0x41>
}
}
}
out:
return idx;
}
2285: eb ca jmp 2251 <vxlan_fdb_dump+0xc1>
2287: 66 0f 1f 84 00 00 00 nopw 0x0(%rax,%rax,1)
228e: 00 00
0000000000002290 <vxlan_fdb_find_rdst>:
2290: e8 00 00 00 00 callq 2295 <vxlan_fdb_find_rdst+0x5>
2295: 55 push %rbp
2296: 4c 8b 4f 30 mov 0x30(%rdi),%r9
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct vxlan_fdb *f;
int err;
hlist_for_each_entry_rcu(f, &vxlan->fdb_head[h], hlist) {
229a: 48 83 c7 30 add $0x30,%rdi
229e: 48 89 e5 mov %rsp,%rbp
22a1: 4c 39 cf cmp %r9,%rdi
22a4: 74 61 je 2307 <vxlan_fdb_find_rdst+0x77>
22a6: 49 8d 41 d8 lea -0x28(%r9),%rax
22aa: 44 0f b7 16 movzwl (%rsi),%r10d
struct net_device *filter_dev, int idx)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
unsigned int h;
for (h = 0; h < FDB_HASH_SIZE; ++h) {
22ae: eb 0d jmp 22bd <vxlan_fdb_find_rdst+0x2d>
22b0: 4c 8b 48 28 mov 0x28(%rax),%r9
22b4: 4c 39 cf cmp %r9,%rdi
22b7: 49 8d 41 d8 lea -0x28(%r9),%rax
22bb: 74 4a je 2307 <vxlan_fdb_find_rdst+0x77>
22bd: 66 44 3b 10 cmp (%rax),%r10w
/* caller should hold vxlan->hash_lock */
static struct vxlan_rdst *vxlan_fdb_find_rdst(struct vxlan_fdb *f,
union vxlan_addr *ip, __be16 port,
__be32 vni, __u32 ifindex)
{
22c1: 75 ed jne 22b0 <vxlan_fdb_find_rdst+0x20>
22c3: 66 41 83 fa 0a cmp $0xa,%r10w
struct vxlan_rdst *rd;
list_for_each_entry(rd, &f->remotes, list) {
22c8: 74 24 je 22ee <vxlan_fdb_find_rdst+0x5e>
22ca: 44 8b 5e 04 mov 0x4(%rsi),%r11d
/* caller should hold vxlan->hash_lock */
static struct vxlan_rdst *vxlan_fdb_find_rdst(struct vxlan_fdb *f,
union vxlan_addr *ip, __be16 port,
__be32 vni, __u32 ifindex)
{
22ce: 44 39 58 04 cmp %r11d,0x4(%rax)
struct vxlan_rdst *rd;
list_for_each_entry(rd, &f->remotes, list) {
22d2: 41 0f 94 c1 sete %r9b
22d6: 45 84 c9 test %r9b,%r9b
22d9: 74 d5 je 22b0 <vxlan_fdb_find_rdst+0x20>
22db: 66 39 50 1c cmp %dx,0x1c(%rax)
22df: 75 cf jne 22b0 <vxlan_fdb_find_rdst+0x20>
22e1: 39 48 20 cmp %ecx,0x20(%rax)
22e4: 75 ca jne 22b0 <vxlan_fdb_find_rdst+0x20>
22e6: 44 39 40 24 cmp %r8d,0x24(%rax)
22ea: 75 c4 jne 22b0 <vxlan_fdb_find_rdst+0x20>
22ec: 5d pop %rbp
#if IS_ENABLED(CONFIG_IPV6)
static inline
bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
{
if (a->sa.sa_family != b->sa.sa_family)
22ed: c3 retq
22ee: 4c 8b 58 08 mov 0x8(%rax),%r11
22f2: 4c 8b 48 10 mov 0x10(%rax),%r9
return false;
if (a->sa.sa_family == AF_INET6)
22f6: 4c 33 5e 08 xor 0x8(%rsi),%r11
return ipv6_addr_equal(&a->sin6.sin6_addr, &b->sin6.sin6_addr);
else
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
22fa: 4c 33 4e 10 xor 0x10(%rsi),%r9
22fe: 4d 09 cb or %r9,%r11
2301: 41 0f 94 c1 sete %r9b
2305: eb cf jmp 22d6 <vxlan_fdb_find_rdst+0x46>
__be32 vni, __u32 ifindex)
{
struct vxlan_rdst *rd;
list_for_each_entry(rd, &f->remotes, list) {
if (vxlan_addr_equal(&rd->remote_ip, ip) &&
2307: 31 c0 xor %eax,%eax
2309: 5d pop %rbp
230a: c3 retq
230b: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
0000000000002310 <vxlan_fdb_delete>:
2310: e8 00 00 00 00 callq 2315 <vxlan_fdb_delete+0x5>
rd->remote_port == port &&
2315: 55 push %rbp
rd->remote_vni == vni &&
2316: 48 89 f7 mov %rsi,%rdi
2319: 48 89 e5 mov %rsp,%rbp
rd->remote_ifindex == ifindex)
return rd;
}
return NULL;
}
231c: 41 57 push %r15
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
const unsigned long *ul1 = (const unsigned long *)a1;
const unsigned long *ul2 = (const unsigned long *)a2;
return ((ul1[0] ^ ul2[0]) | (ul1[1] ^ ul2[1])) == 0UL;
231e: 41 56 push %r14
2320: 41 55 push %r13
2322: 41 54 push %r12
2324: 4c 8d b2 40 08 00 00 lea 0x840(%rdx),%r14
232b: 53 push %rbx
232c: 49 89 d4 mov %rdx,%r12
232f: 49 89 cf mov %rcx,%r15
2332: 4c 89 f6 mov %r14,%rsi
2335: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
2339: 48 83 ec 40 sub $0x40,%rsp
233d: 4c 8d 4c 24 0c lea 0xc(%rsp),%r9
/* Delete entry (via netlink) */
static int vxlan_fdb_delete(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev,
const unsigned char *addr, u16 vid)
{
2342: 4c 8d 44 24 08 lea 0x8(%rsp),%r8
2347: 48 8d 4c 24 06 lea 0x6(%rsp),%rcx
234c: 48 8d 54 24 10 lea 0x10(%rsp),%rdx
2351: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
2358: 00 00
235a: 48 89 44 24 38 mov %rax,0x38(%rsp)
235f: 31 c0 xor %eax,%eax
2361: e8 ba e2 ff ff callq 620 <vxlan_fdb_parse>
2366: 85 c0 test %eax,%eax
2368: 89 c3 mov %eax,%ebx
236a: 74 25 je 2391 <vxlan_fdb_delete+0x81>
236c: 89 d8 mov %ebx,%eax
__be16 port;
__be32 vni;
u32 ifindex;
int err;
err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &vni, &ifindex);
236e: 48 8b 5c 24 38 mov 0x38(%rsp),%rbx
2373: 65 48 33 1c 25 28 00 xor %gs:0x28,%rbx
237a: 00 00
237c: 0f 85 10 01 00 00 jne 2492 <vxlan_fdb_delete+0x182>
/* Delete entry (via netlink) */
static int vxlan_fdb_delete(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev,
const unsigned char *addr, u16 vid)
{
2382: 48 8d 65 d8 lea -0x28(%rbp),%rsp
2386: 5b pop %rbx
2387: 41 5c pop %r12
2389: 41 5d pop %r13
238b: 41 5e pop %r14
238d: 41 5f pop %r15
238f: 5d pop %rbp
2390: c3 retq
__be16 port;
__be32 vni;
u32 ifindex;
int err;
err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &vni, &ifindex);
2391: 4d 8d ac 24 28 09 00 lea 0x928(%r12),%r13
2398: 00
2399: 4c 89 ef mov %r13,%rdi
out:
spin_unlock_bh(&vxlan->hash_lock);
return err;
}
239c: e8 00 00 00 00 callq 23a1 <vxlan_fdb_delete+0x91>
23a1: 4c 89 fe mov %r15,%rsi
23a4: 4c 89 f7 mov %r14,%rdi
23a7: e8 54 dc ff ff callq 0 <__vxlan_find_mac>
23ac: 48 85 c0 test %rax,%rax
23af: 49 89 c4 mov %rax,%r12
23b2: 0f 84 d0 00 00 00 je 2488 <vxlan_fdb_delete+0x178>
23b8: 66 83 7c 24 10 0a cmpw $0xa,0x10(%rsp)
23be: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 23c5 <vxlan_fdb_delete+0xb5>
raw_spin_lock(&lock->rlock);
}
static __always_inline void spin_lock_bh(spinlock_t *lock)
{
raw_spin_lock_bh(&lock->rlock);
23c5: 49 89 44 24 28 mov %rax,0x28(%r12)
23ca: 0f 84 a6 00 00 00 je 2476 <vxlan_fdb_delete+0x166>
23d0: 8b 44 24 14 mov 0x14(%rsp),%eax
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
23d4: 85 c0 test %eax,%eax
23d6: 0f 94 c0 sete %al
23d9: 84 c0 test %al,%al
23db: 74 18 je 23f5 <vxlan_fdb_delete+0xe5>
if (f)
23dd: 4c 89 e6 mov %r12,%rsi
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
23e0: 4c 89 f7 mov %r14,%rdi
if (f)
23e3: e8 48 fa ff ff callq 1e30 <vxlan_fdb_destroy>
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
23e8: 4c 89 ef mov %r13,%rdi
23eb: e8 00 00 00 00 callq 23f0 <vxlan_fdb_delete+0xe0>
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
if (f)
f->used = jiffies;
23f0: e9 77 ff ff ff jmpq 236c <vxlan_fdb_delete+0x5c>
23f5: 0f b7 54 24 06 movzwl 0x6(%rsp),%edx
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
23fa: 44 8b 44 24 0c mov 0xc(%rsp),%r8d
23ff: 48 8d 74 24 10 lea 0x10(%rsp),%rsi
return ipv6_addr_any(&ipa->sin6.sin6_addr);
else
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
2404: 8b 4c 24 08 mov 0x8(%rsp),%ecx
2408: 4c 89 e7 mov %r12,%rdi
spin_lock_bh(&vxlan->hash_lock);
f = vxlan_find_mac(vxlan, addr);
if (!f)
goto out;
if (!vxlan_addr_any(&ip)) {
240b: e8 80 fe ff ff callq 2290 <vxlan_fdb_find_rdst>
vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH);
kfree_rcu(rd, rcu);
goto out;
}
vxlan_fdb_destroy(vxlan, f);
2410: 48 85 c0 test %rax,%rax
2413: 49 89 c7 mov %rax,%r15
2416: 74 70 je 2488 <vxlan_fdb_delete+0x178>
raw_spin_unlock(&lock->rlock);
}
static __always_inline void spin_unlock_bh(spinlock_t *lock)
{
raw_spin_unlock_bh(&lock->rlock);
2418: 49 8b 44 24 30 mov 0x30(%r12),%rax
241d: 49 8d 54 24 30 lea 0x30(%r12),%rdx
out:
spin_unlock_bh(&vxlan->hash_lock);
return err;
2422: 48 39 c2 cmp %rax,%rdx
f = vxlan_find_mac(vxlan, addr);
if (!f)
goto out;
if (!vxlan_addr_any(&ip)) {
rd = vxlan_fdb_find_rdst(f, &ip, port, vni, ifindex);
2425: 74 0c je 2433 <vxlan_fdb_delete+0x123>
2427: 49 8b 44 24 38 mov 0x38(%r12),%rax
242c: 49 39 44 24 30 cmp %rax,0x30(%r12)
2431: 74 aa je 23dd <vxlan_fdb_delete+0xcd>
2433: 49 8b 47 30 mov 0x30(%r15),%rax
2437: 49 8b 57 28 mov 0x28(%r15),%rdx
243b: 4c 89 e6 mov %r12,%rsi
243e: 4c 89 f7 mov %r14,%rdi
if (!rd)
2441: b9 1d 00 00 00 mov $0x1d,%ecx
2446: 48 89 42 08 mov %rax,0x8(%rdx)
244a: 48 89 10 mov %rdx,(%rax)
err = 0;
/* remove a destination if it's not the only one on the list,
* otherwise destroy the fdb entry
*/
if (rd && !list_is_singular(&f->remotes)) {
244d: 48 b8 00 02 00 00 00 movabs $0xdead000000000200,%rax
2454: 00 ad de
* list_is_singular - tests whether a list has just one entry.
* @head: the list to test.
*/
static inline int list_is_singular(const struct list_head *head)
{
return !list_empty(head) && (head->next == head->prev);
2457: 49 89 47 30 mov %rax,0x30(%r15)
245b: 4c 89 fa mov %r15,%rdx
245e: e8 dd f8 ff ff callq 1d40 <vxlan_fdb_notify>
* in an undefined state.
*/
#ifndef CONFIG_DEBUG_LIST
static inline void __list_del_entry(struct list_head *entry)
{
__list_del(entry->prev, entry->next);
2463: 49 8d 7f 38 lea 0x38(%r15),%rdi
2467: be 38 00 00 00 mov $0x38,%esi
list_del_rcu(&rd->list);
vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH);
246c: e8 00 00 00 00 callq 2471 <vxlan_fdb_delete+0x161>
2471: e9 72 ff ff ff jmpq 23e8 <vxlan_fdb_delete+0xd8>
* This is only for internal list manipulation where we know
* the prev/next entries already!
*/
static inline void __list_del(struct list_head * prev, struct list_head * next)
{
next->prev = prev;
2476: 48 8b 44 24 18 mov 0x18(%rsp),%rax
{
switch (size) {
case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
247b: 48 0b 44 24 20 or 0x20(%rsp),%rax
* grace period has elapsed.
*/
static inline void list_del_rcu(struct list_head *entry)
{
__list_del_entry(entry);
entry->prev = LIST_POISON2;
2480: 0f 94 c0 sete %al
2483: e9 51 ff ff ff jmpq 23d9 <vxlan_fdb_delete+0xc9>
2488: bb fe ff ff ff mov $0xfffffffe,%ebx
248d: e9 56 ff ff ff jmpq 23e8 <vxlan_fdb_delete+0xd8>
2492: e8 00 00 00 00 callq 2497 <vxlan_fdb_delete+0x187>
kfree_rcu(rd, rcu);
2497: 66 0f 1f 84 00 00 00 nopw 0x0(%rax,%rax,1)
249e: 00 00
00000000000024a0 <vxlan_fdb_append>:
24a0: e8 00 00 00 00 callq 24a5 <vxlan_fdb_append+0x5>
goto out;
24a5: 55 push %rbp
static inline bool ipv6_addr_any(const struct in6_addr *a)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
const unsigned long *ul = (const unsigned long *)a;
return (ul[0] | ul[1]) == 0UL;
24a6: 48 89 e5 mov %rsp,%rbp
24a9: 41 57 push %r15
24ab: 41 56 push %r14
24ad: 41 55 push %r13
24af: 41 54 push %r12
24b1: 41 89 d6 mov %edx,%r14d
24b4: 53 push %rbx
24b5: 0f b7 d2 movzwl %dx,%edx
err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &vni, &ifindex);
if (err)
return err;
err = -ENOENT;
24b8: 48 89 fb mov %rdi,%rbx
24bb: 49 89 f7 mov %rsi,%r15
24be: 41 89 cd mov %ecx,%r13d
24c1: 45 89 c4 mov %r8d,%r12d
out:
spin_unlock_bh(&vxlan->hash_lock);
return err;
}
24c4: 48 83 ec 10 sub $0x10,%rsp
24c8: 4c 89 4d d0 mov %r9,-0x30(%rbp)
24cc: e8 bf fd ff ff callq 2290 <vxlan_fdb_find_rdst>
/* Add/update destinations for multicast */
static int vxlan_fdb_append(struct vxlan_fdb *f,
union vxlan_addr *ip, __be16 port, __be32 vni,
__u32 ifindex, struct vxlan_rdst **rdp)
{
24d1: 31 d2 xor %edx,%edx
24d3: 48 85 c0 test %rax,%rax
24d6: 74 11 je 24e9 <vxlan_fdb_append+0x49>
24d8: 48 83 c4 10 add $0x10,%rsp
24dc: 89 d0 mov %edx,%eax
24de: 5b pop %rbx
24df: 41 5c pop %r12
24e1: 41 5d pop %r13
24e3: 41 5e pop %r14
struct vxlan_rdst *rd;
rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
24e5: 41 5f pop %r15
24e7: 5d pop %rbp
/* Add/update destinations for multicast */
static int vxlan_fdb_append(struct vxlan_fdb *f,
union vxlan_addr *ip, __be16 port, __be32 vni,
__u32 ifindex, struct vxlan_rdst **rdp)
{
24e8: c3 retq
24e9: 48 8b 3d 00 00 00 00 mov 0x0(%rip),%rdi # 24f0 <vxlan_fdb_append+0x50>
24f0: ba 58 00 00 00 mov $0x58,%edx
24f5: be 20 00 08 02 mov $0x2080020,%esi
24fa: e8 00 00 00 00 callq 24ff <vxlan_fdb_append+0x5f>
struct vxlan_rdst *rd;
rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
24ff: 48 85 c0 test %rax,%rax
if (rd)
return 0;
2502: 74 70 je 2574 <vxlan_fdb_append+0xd4>
__u32 ifindex, struct vxlan_rdst **rdp)
{
struct vxlan_rdst *rd;
rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
if (rd)
2504: 48 8d 78 48 lea 0x48(%rax),%rdi
list_add_tail_rcu(&rd->list, &f->remotes);
*rdp = rd;
return 1;
}
2508: be 20 00 08 02 mov $0x2080020,%esi
250d: 48 89 45 c8 mov %rax,-0x38(%rbp)
2511: e8 00 00 00 00 callq 2516 <vxlan_fdb_append+0x76>
2516: 85 c0 test %eax,%eax
2518: 48 8b 55 c8 mov -0x38(%rbp),%rdx
int index = kmalloc_index(size);
if (!index)
return ZERO_SIZE_PTR;
return kmem_cache_alloc_trace(kmalloc_caches[index],
251c: 75 60 jne 257e <vxlan_fdb_append+0xde>
251e: 49 8b 07 mov (%r15),%rax
2521: 48 8b 4b 38 mov 0x38(%rbx),%rcx
2525: 48 8d 73 30 lea 0x30(%rbx),%rsi
2529: 66 44 89 72 1c mov %r14w,0x1c(%rdx)
252e: 44 89 6a 20 mov %r13d,0x20(%rdx)
rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
if (rd)
return 0;
rd = kmalloc(sizeof(*rd), GFP_ATOMIC);
if (rd == NULL)
2532: 44 89 62 24 mov %r12d,0x24(%rdx)
return -ENOBUFS;
if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
2536: 48 89 72 28 mov %rsi,0x28(%rdx)
253a: 48 89 02 mov %rax,(%rdx)
253d: 49 8b 47 08 mov 0x8(%r15),%rax
2541: 48 89 4a 30 mov %rcx,0x30(%rdx)
2545: 48 89 42 08 mov %rax,0x8(%rdx)
2549: 49 8b 47 10 mov 0x10(%r15),%rax
254d: 48 89 42 10 mov %rax,0x10(%rdx)
* list_for_each_entry_rcu().
*/
static inline void list_add_tail_rcu(struct list_head *new,
struct list_head *head)
{
__list_add_rcu(new, head->prev, head);
2551: 41 8b 47 18 mov 0x18(%r15),%eax
rd->remote_ip = *ip;
rd->remote_port = port;
rd->remote_vni = vni;
rd->remote_ifindex = ifindex;
list_add_tail_rcu(&rd->list, &f->remotes);
2555: 89 42 18 mov %eax,0x18(%rdx)
2558: 48 8d 42 28 lea 0x28(%rdx),%rax
kfree(rd);
return -ENOBUFS;
}
rd->remote_ip = *ip;
rd->remote_port = port;
255c: 48 89 01 mov %rax,(%rcx)
rd->remote_vni = vni;
255f: 48 89 43 38 mov %rax,0x38(%rbx)
rd->remote_ifindex = ifindex;
2563: 48 8b 45 d0 mov -0x30(%rbp),%rax
list_add_tail_rcu(&rd->list, &f->remotes);
2567: 48 89 10 mov %rdx,(%rax)
if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
kfree(rd);
return -ENOBUFS;
}
rd->remote_ip = *ip;
256a: ba 01 00 00 00 mov $0x1,%edx
256f: e9 64 ff ff ff jmpq 24d8 <vxlan_fdb_append+0x38>
#ifndef CONFIG_DEBUG_LIST
static inline void __list_add_rcu(struct list_head *new,
struct list_head *prev, struct list_head *next)
{
new->next = next;
new->prev = prev;
2574: ba 97 ff ff ff mov $0xffffff97,%edx
2579: e9 5a ff ff ff jmpq 24d8 <vxlan_fdb_append+0x38>
257e: 48 89 d7 mov %rdx,%rdi
2581: e8 00 00 00 00 callq 2586 <vxlan_fdb_append+0xe6>
2586: ba 97 ff ff ff mov $0xffffff97,%edx
rd->remote_port = port;
rd->remote_vni = vni;
rd->remote_ifindex = ifindex;
list_add_tail_rcu(&rd->list, &f->remotes);
258b: e9 48 ff ff ff jmpq 24d8 <vxlan_fdb_append+0x38>
0000000000002590 <vxlan_fdb_create>:
rcu_assign_pointer(list_next_rcu(prev), new);
next->prev = new;
2590: e8 00 00 00 00 callq 2595 <vxlan_fdb_create+0x5>
*rdp = rd;
2595: 55 push %rbp
2596: 48 89 e5 mov %rsp,%rbp
2599: 41 57 push %r15
return 1;
259b: 41 56 push %r14
259d: 41 55 push %r13
259f: 41 54 push %r12
25a1: 49 89 f7 mov %rsi,%r15
if (rd)
return 0;
rd = kmalloc(sizeof(*rd), GFP_ATOMIC);
if (rd == NULL)
return -ENOBUFS;
25a4: 53 push %rbx
25a5: 49 89 fc mov %rdi,%r12
25a8: 45 89 c5 mov %r8d,%r13d
25ab: 48 83 ec 20 sub $0x20,%rsp
if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
kfree(rd);
25af: 48 89 55 b8 mov %rdx,-0x48(%rbp)
25b3: 89 4d c4 mov %ecx,-0x3c(%rbp)
return -ENOBUFS;
25b6: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
25bd: 00 00
25bf: 48 89 45 d0 mov %rax,-0x30(%rbp)
static int vxlan_fdb_create(struct vxlan_dev *vxlan,
const u8 *mac, union vxlan_addr *ip,
__u16 state, __u16 flags,
__be16 port, __be32 vni, __u32 ifindex,
__u8 ndm_flags)
{
25c3: 31 c0 xor %eax,%eax
25c5: 44 89 4d c0 mov %r9d,-0x40(%rbp)
25c9: 44 8b 75 20 mov 0x20(%rbp),%r14d
25cd: 48 c7 45 c8 00 00 00 movq $0x0,-0x38(%rbp)
25d4: 00
25d5: e8 26 da ff ff callq 0 <__vxlan_find_mac>
25da: 48 85 c0 test %rax,%rax
25dd: 0f 84 41 01 00 00 je 2724 <vxlan_fdb_create+0x194>
25e3: 41 f7 c5 00 02 00 00 test $0x200,%r13d
25ea: 0f 85 e6 00 00 00 jne 26d6 <vxlan_fdb_create+0x146>
25f0: 48 89 c3 mov %rax,%rbx
25f3: 45 31 ff xor %r15d,%r15d
25f6: 8b 45 c4 mov -0x3c(%rbp),%eax
25f9: 66 39 43 46 cmp %ax,0x46(%rbx)
struct vxlan_rdst *rd = NULL;
25fd: 74 15 je 2614 <vxlan_fdb_create+0x84>
25ff: 66 89 43 46 mov %ax,0x46(%rbx)
2603: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 260a <vxlan_fdb_create+0x7a>
struct vxlan_fdb *f;
int notify = 0;
f = __vxlan_find_mac(vxlan, mac);
if (f) {
260a: 41 bf 01 00 00 00 mov $0x1,%r15d
2610: 48 89 43 20 mov %rax,0x20(%rbx)
if (flags & NLM_F_EXCL) {
2614: 44 38 73 48 cmp %r14b,0x48(%rbx)
2618: 74 15 je 262f <vxlan_fdb_create+0x9f>
261a: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 2621 <vxlan_fdb_create+0x91>
2621: 44 88 73 48 mov %r14b,0x48(%rbx)
__be16 port, __be32 vni, __u32 ifindex,
__u8 ndm_flags)
{
struct vxlan_rdst *rd = NULL;
struct vxlan_fdb *f;
int notify = 0;
2625: 41 bf 01 00 00 00 mov $0x1,%r15d
if (flags & NLM_F_EXCL) {
netdev_dbg(vxlan->dev,
"lost race to create %pM\n", mac);
return -EEXIST;
}
if (f->state != state) {
262b: 48 89 43 20 mov %rax,0x20(%rbx)
f->state = state;
262f: 41 f7 c5 00 01 00 00 test $0x100,%r13d
f->updated = jiffies;
2636: 74 3c je 2674 <vxlan_fdb_create+0xe4>
2638: 8b 43 40 mov 0x40(%rbx),%eax
notify = 1;
263b: a8 01 test $0x1,%al
263d: 0f 85 0f 02 00 00 jne 2852 <vxlan_fdb_create+0x2c2>
"lost race to create %pM\n", mac);
return -EEXIST;
}
if (f->state != state) {
f->state = state;
f->updated = jiffies;
2643: 0f b7 53 44 movzwl 0x44(%rbx),%edx
notify = 1;
}
if (f->flags != ndm_flags) {
2647: 09 c2 or %eax,%edx
2649: 0f 84 03 02 00 00 je 2852 <vxlan_fdb_create+0x2c2>
f->flags = ndm_flags;
f->updated = jiffies;
264f: 0f b7 55 c0 movzwl -0x40(%rbp),%edx
f->state = state;
f->updated = jiffies;
notify = 1;
}
if (f->flags != ndm_flags) {
f->flags = ndm_flags;
2653: 44 8b 45 18 mov 0x18(%rbp),%r8d
f->updated = jiffies;
notify = 1;
2657: 48 89 df mov %rbx,%rdi
265a: 8b 4d 10 mov 0x10(%rbp),%ecx
f->updated = jiffies;
notify = 1;
}
if (f->flags != ndm_flags) {
f->flags = ndm_flags;
f->updated = jiffies;
265d: 48 8b 75 b8 mov -0x48(%rbp),%rsi
notify = 1;
}
if ((flags & NLM_F_REPLACE)) {
2661: e8 2a fc ff ff callq 2290 <vxlan_fdb_find_rdst>
2666: 31 d2 xor %edx,%edx
* By definition the broadcast address is also a multicast address.
*/
static inline bool is_multicast_ether_addr(const u8 *addr)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
u32 a = *(const u32 *)addr;
2668: 48 85 c0 test %rax,%rax
/* Only change unicasts */
if (!(is_multicast_ether_addr(f->eth_addr) ||
266b: 0f 84 fc 01 00 00 je 286d <vxlan_fdb_create+0x2dd>
2671: 41 09 d7 or %edx,%r15d
2674: 41 81 e5 00 08 00 00 and $0x800,%r13d
267b: 74 31 je 26ae <vxlan_fdb_create+0x11e>
267d: 8b 43 40 mov 0x40(%rbx),%eax
union vxlan_addr *ip, __be16 port, __be32 vni,
__u32 ifindex)
{
struct vxlan_rdst *rd;
rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
2680: a8 01 test $0x1,%al
2682: 75 08 jne 268c <vxlan_fdb_create+0xfc>
2684: 0f b7 53 44 movzwl 0x44(%rbx),%edx
2688: 09 c2 or %eax,%edx
268a: 75 22 jne 26ae <vxlan_fdb_create+0x11e>
268c: 0f b7 55 c0 movzwl -0x40(%rbp),%edx
2690: 44 8b 45 18 mov 0x18(%rbp),%r8d
2694: 4c 8d 4d c8 lea -0x38(%rbp),%r9
if (rd)
2698: 8b 4d 10 mov 0x10(%rbp),%ecx
269b: 48 8b 75 b8 mov -0x48(%rbp),%rsi
269f: 48 89 df mov %rbx,%rdi
}
if ((flags & NLM_F_REPLACE)) {
/* Only change unicasts */
if (!(is_multicast_ether_addr(f->eth_addr) ||
is_zero_ether_addr(f->eth_addr))) {
notify |= vxlan_fdb_replace(f, ip, port, vni,
26a2: e8 f9 fd ff ff callq 24a0 <vxlan_fdb_append>
ifindex);
} else
return -EOPNOTSUPP;
}
if ((flags & NLM_F_APPEND) &&
26a7: 85 c0 test %eax,%eax
26a9: 78 35 js 26e0 <vxlan_fdb_create+0x150>
26ab: 41 09 c7 or %eax,%r15d
26ae: 31 c0 xor %eax,%eax
26b0: 45 85 ff test %r15d,%r15d
26b3: 74 2b je 26e0 <vxlan_fdb_create+0x150>
(is_multicast_ether_addr(f->eth_addr) ||
26b5: 48 8b 55 c8 mov -0x38(%rbp),%rdx
26b9: 48 85 d2 test %rdx,%rdx
is_zero_ether_addr(f->eth_addr))) {
int rc = vxlan_fdb_append(f, ip, port, vni, ifindex,
26bc: 0f 84 9a 01 00 00 je 285c <vxlan_fdb_create+0x2cc>
26c2: b9 1c 00 00 00 mov $0x1c,%ecx
26c7: 48 89 de mov %rbx,%rsi
26ca: 4c 89 e7 mov %r12,%rdi
26cd: e8 6e f6 ff ff callq 1d40 <vxlan_fdb_notify>
26d2: 31 c0 xor %eax,%eax
26d4: eb 0a jmp 26e0 <vxlan_fdb_create+0x150>
26d6: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
&rd);
if (rc < 0)
return rc;
notify |= rc;
26db: b8 ef ff ff ff mov $0xffffffef,%eax
++vxlan->addrcnt;
hlist_add_head_rcu(&f->hlist,
vxlan_fdb_head(vxlan, mac));
}
if (notify) {
26e0: 48 8b 4d d0 mov -0x30(%rbp),%rcx
26e4: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
26eb: 00 00
if (rd == NULL)
26ed: 0f 85 01 02 00 00 jne 28f4 <vxlan_fdb_create+0x364>
rd = first_remote_rtnl(f);
vxlan_fdb_notify(vxlan, f, rd, RTM_NEWNEIGH);
26f3: 48 83 c4 20 add $0x20,%rsp
26f7: 5b pop %rbx
26f8: 41 5c pop %r12
26fa: 41 5d pop %r13
26fc: 41 5e pop %r14
26fe: 41 5f pop %r15
2700: 5d pop %rbp
2701: c3 retq
}
return 0;
2702: 49 8b 74 24 30 mov 0x30(%r12),%rsi
2707: 4c 89 f9 mov %r15,%rcx
270a: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
}
2711: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
2718: e8 00 00 00 00 callq 271d <vxlan_fdb_create+0x18d>
271d: b8 ef ff ff ff mov $0xffffffef,%eax
2722: eb bc jmp 26e0 <vxlan_fdb_create+0x150>
2724: 41 f7 c5 00 04 00 00 test $0x400,%r13d
272b: 0f 84 a5 01 00 00 je 28d6 <vxlan_fdb_create+0x346>
2731: 41 8b 84 24 58 01 00 mov 0x158(%r12),%eax
2738: 00
int notify = 0;
f = __vxlan_find_mac(vxlan, mac);
if (f) {
if (flags & NLM_F_EXCL) {
netdev_dbg(vxlan->dev,
2739: 85 c0 test %eax,%eax
273b: 74 0e je 274b <vxlan_fdb_create+0x1bb>
273d: 41 3b 84 24 ec 00 00 cmp 0xec(%r12),%eax
2744: 00
2745: 0f 86 95 01 00 00 jbe 28e0 <vxlan_fdb_create+0x350>
274b: 41 81 e5 00 01 00 00 and $0x100,%r13d
"lost race to create %pM\n", mac);
return -EEXIST;
2752: 74 18 je 276c <vxlan_fdb_create+0x1dc>
if (rc < 0)
return rc;
notify |= rc;
}
} else {
if (!(flags & NLM_F_CREATE))
2754: 41 8b 07 mov (%r15),%eax
2757: a8 01 test $0x1,%al
2759: 0f 85 f3 00 00 00 jne 2852 <vxlan_fdb_create+0x2c2>
275f: 41 0f b7 57 04 movzwl 0x4(%r15),%edx
return -ENOENT;
if (vxlan->cfg.addrmax &&
2764: 09 c2 or %eax,%edx
2766: 0f 84 e6 00 00 00 je 2852 <vxlan_fdb_create+0x2c2>
276c: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
2771: 48 8b 3d 00 00 00 00 mov 0x0(%rip),%rdi # 2778 <vxlan_fdb_create+0x1e8>
2778: ba 50 00 00 00 mov $0x50,%edx
vxlan->addrcnt >= vxlan->cfg.addrmax)
return -ENOSPC;
/* Disallow replace to add a multicast entry */
if ((flags & NLM_F_REPLACE) &&
277d: be 20 00 08 02 mov $0x2080020,%esi
2782: e8 00 00 00 00 callq 2787 <vxlan_fdb_create+0x1f7>
2787: 48 85 c0 test %rax,%rax
278a: 48 89 c3 mov %rax,%rbx
278d: 0f 84 57 01 00 00 je 28ea <vxlan_fdb_create+0x35a>
(is_multicast_ether_addr(mac) || is_zero_ether_addr(mac)))
2793: 0f b7 45 c4 movzwl -0x3c(%rbp),%eax
2797: 0f b7 55 c0 movzwl -0x40(%rbp),%edx
279b: 4c 8d 4d c8 lea -0x38(%rbp),%r9
279f: 44 8b 45 18 mov 0x18(%rbp),%r8d
27a3: 8b 4d 10 mov 0x10(%rbp),%ecx
27a6: 48 89 df mov %rbx,%rdi
27a9: 48 8b 75 b8 mov -0x48(%rbp),%rsi
27ad: 44 88 73 48 mov %r14b,0x48(%rbx)
27b1: 66 89 43 46 mov %ax,0x46(%rbx)
27b5: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 27bc <vxlan_fdb_create+0x22c>
27bc: 48 89 43 28 mov %rax,0x28(%rbx)
return -EOPNOTSUPP;
netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
f = kmalloc(sizeof(*f), GFP_ATOMIC);
if (!f)
27c0: 48 89 43 20 mov %rax,0x20(%rbx)
return -ENOMEM;
notify = 1;
f->state = state;
27c4: 48 8d 43 30 lea 0x30(%rbx),%rax
f->flags = ndm_flags;
f->updated = f->used = jiffies;
INIT_LIST_HEAD(&f->remotes);
memcpy(f->eth_addr, mac, ETH_ALEN);
vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
27c8: 48 89 43 30 mov %rax,0x30(%rbx)
27cc: 48 89 43 38 mov %rax,0x38(%rbx)
27d0: 41 8b 07 mov (%r15),%eax
27d3: 89 43 40 mov %eax,0x40(%rbx)
27d6: 41 0f b7 47 04 movzwl 0x4(%r15),%eax
27db: 66 89 43 44 mov %ax,0x44(%rbx)
if (!f)
return -ENOMEM;
notify = 1;
f->state = state;
f->flags = ndm_flags;
27df: e8 bc fc ff ff callq 24a0 <vxlan_fdb_append>
f = kmalloc(sizeof(*f), GFP_ATOMIC);
if (!f)
return -ENOMEM;
notify = 1;
f->state = state;
27e4: 41 83 84 24 ec 00 00 addl $0x1,0xec(%r12)
27eb: 00 01
f->flags = ndm_flags;
f->updated = f->used = jiffies;
27ed: 48 ba eb 83 b5 80 46 movabs $0x61c8864680b583eb,%rdx
27f4: 86 c8 61
INIT_LIST_HEAD(&f->remotes);
27f7: 49 8b 07 mov (%r15),%rax
27fa: 48 c1 e0 10 shl $0x10,%rax
struct list_head name = LIST_HEAD_INIT(name)
static inline void INIT_LIST_HEAD(struct list_head *list)
{
WRITE_ONCE(list->next, list);
list->prev = list;
27fe: 48 0f af c2 imul %rdx,%rax
memcpy(f->eth_addr, mac, ETH_ALEN);
2802: 48 c1 e8 38 shr $0x38,%rax
2806: 48 83 c0 2c add $0x2c,%rax
280a: 49 8d 14 c4 lea (%r12,%rax,8),%rdx
280e: 49 8b 04 c4 mov (%r12,%rax,8),%rax
vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
2812: 48 89 53 08 mov %rdx,0x8(%rbx)
++vxlan->addrcnt;
2816: 48 89 03 mov %rax,(%rbx)
2819: 48 85 c0 test %rax,%rax
281c: 48 89 1a mov %rbx,(%rdx)
#endif
static __always_inline u32 hash_64_generic(u64 val, unsigned int bits)
{
#if BITS_PER_LONG == 64
/* 64x64-bit multiply is efficient on all 64-bit processors */
return val * GOLDEN_RATIO_64 >> (64 - bits);
281f: 0f 84 90 fe ff ff je 26b5 <vxlan_fdb_create+0x125>
2825: 48 89 58 08 mov %rbx,0x8(%rax)
2829: e9 87 fe ff ff jmpq 26b5 <vxlan_fdb_create+0x125>
282e: 49 8b 74 24 30 mov 0x30(%r12),%rsi
/* Hash chain to use given mac address */
static inline struct hlist_head *vxlan_fdb_head(struct vxlan_dev *vxlan,
const u8 *mac)
{
return &vxlan->fdb_head[eth_hash(mac)];
2833: 4c 8b 45 b8 mov -0x48(%rbp),%r8
2837: 4c 89 f9 mov %r15,%rcx
283a: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
* list-traversal primitive must be guarded by rcu_read_lock().
*/
static inline void hlist_add_head_rcu(struct hlist_node *n,
struct hlist_head *h)
{
struct hlist_node *first = h->first;
2841: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
n->next = first;
2848: e8 00 00 00 00 callq 284d <vxlan_fdb_create+0x2bd>
284d: e9 1f ff ff ff jmpq 2771 <vxlan_fdb_create+0x1e1>
n->pprev = &h->first;
rcu_assign_pointer(hlist_first_rcu(h), n);
if (first)
2852: b8 a1 ff ff ff mov $0xffffffa1,%eax
first->pprev = &n->next;
2857: e9 84 fe ff ff jmpq 26e0 <vxlan_fdb_create+0x150>
285c: 48 8b 43 30 mov 0x30(%rbx),%rax
/* Disallow replace to add a multicast entry */
if ((flags & NLM_F_REPLACE) &&
(is_multicast_ether_addr(mac) || is_zero_ether_addr(mac)))
return -EOPNOTSUPP;
netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
2860: 48 8d 50 d8 lea -0x28(%rax),%rdx
2864: 48 89 55 c8 mov %rdx,-0x38(%rbp)
2868: e9 55 fe ff ff jmpq 26c2 <vxlan_fdb_create+0x132>
286d: 48 8b 43 30 mov 0x30(%rbx),%rax
2871: 48 8d 4b 30 lea 0x30(%rbx),%rcx
2875: 48 39 c1 cmp %rax,%rcx
2878: 0f 84 f3 fd ff ff je 2671 <vxlan_fdb_create+0xe1>
287e: 48 8b 43 30 mov 0x30(%rbx),%rax
if (!(is_multicast_ether_addr(f->eth_addr) ||
is_zero_ether_addr(f->eth_addr))) {
notify |= vxlan_fdb_replace(f, ip, port, vni,
ifindex);
} else
return -EOPNOTSUPP;
2882: 48 83 f8 28 cmp $0x28,%rax
2886: 0f 84 e5 fd ff ff je 2671 <vxlan_fdb_create+0xe1>
return list_entry_rcu(fdb->remotes.next, struct vxlan_rdst, list);
}
static inline struct vxlan_rdst *first_remote_rtnl(struct vxlan_fdb *fdb)
{
return list_first_entry(&fdb->remotes, struct vxlan_rdst, list);
288c: 48 8b 15 00 00 00 00 mov 0x0(%rip),%rdx # 2893 <vxlan_fdb_create+0x303>
2893: 48 8b 7d b8 mov -0x48(%rbp),%rdi
vxlan_fdb_head(vxlan, mac));
}
if (notify) {
if (rd == NULL)
rd = first_remote_rtnl(f);
2897: 0f b7 75 c0 movzwl -0x40(%rbp),%esi
289b: 48 89 50 28 mov %rdx,0x28(%rax)
})
static __always_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
__READ_ONCE_SIZE;
289f: 48 8b 17 mov (%rdi),%rdx
rd = vxlan_fdb_find_rdst(f, ip, port, vni, ifindex);
if (rd)
return 0;
rd = list_first_entry_or_null(&f->remotes, struct vxlan_rdst, list);
28a2: 48 89 50 d8 mov %rdx,-0x28(%rax)
28a6: 48 8b 57 08 mov 0x8(%rdi),%rdx
28aa: 48 89 50 e0 mov %rdx,-0x20(%rax)
28ae: 48 8b 57 10 mov 0x10(%rdi),%rdx
if (!rd)
28b2: 48 89 50 e8 mov %rdx,-0x18(%rax)
28b6: 8b 57 18 mov 0x18(%rdi),%edx
28b9: 66 89 70 f4 mov %si,-0xc(%rax)
* This do not free the cached dst to avoid races and contentions.
* the dst will be freed on later cache lookup.
*/
static inline void dst_cache_reset(struct dst_cache *dst_cache)
{
dst_cache->reset_ts = jiffies;
28bd: 8b 7d 10 mov 0x10(%rbp),%edi
28c0: 8b 75 18 mov 0x18(%rbp),%esi
return 0;
dst_cache_reset(&rd->dst_cache);
rd->remote_ip = *ip;
28c3: 89 50 f0 mov %edx,-0x10(%rax)
28c6: ba 01 00 00 00 mov $0x1,%edx
28cb: 89 78 f8 mov %edi,-0x8(%rax)
28ce: 89 70 fc mov %esi,-0x4(%rax)
28d1: e9 9b fd ff ff jmpq 2671 <vxlan_fdb_create+0xe1>
28d6: b8 fe ff ff ff mov $0xfffffffe,%eax
28db: e9 00 fe ff ff jmpq 26e0 <vxlan_fdb_create+0x150>
28e0: b8 e4 ff ff ff mov $0xffffffe4,%eax
28e5: e9 f6 fd ff ff jmpq 26e0 <vxlan_fdb_create+0x150>
rd->remote_port = port;
28ea: b8 f4 ff ff ff mov $0xfffffff4,%eax
rd->remote_vni = vni;
28ef: e9 ec fd ff ff jmpq 26e0 <vxlan_fdb_create+0x150>
rd = list_first_entry_or_null(&f->remotes, struct vxlan_rdst, list);
if (!rd)
return 0;
dst_cache_reset(&rd->dst_cache);
rd->remote_ip = *ip;
28f4: e8 00 00 00 00 callq 28f9 <vxlan_fdb_create+0x369>
rd->remote_port = port;
rd->remote_vni = vni;
rd->remote_ifindex = ifindex;
return 1;
28f9: 0f 1f 80 00 00 00 00 nopl 0x0(%rax)
0000000000002900 <vxlan_dev_configure>:
dst_cache_reset(&rd->dst_cache);
rd->remote_ip = *ip;
rd->remote_port = port;
rd->remote_vni = vni;
rd->remote_ifindex = ifindex;
2900: e8 00 00 00 00 callq 2905 <vxlan_dev_configure+0x5>
2905: 55 push %rbp
return rc;
notify |= rc;
}
} else {
if (!(flags & NLM_F_CREATE))
return -ENOENT;
2906: 48 89 e5 mov %rsp,%rbp
2909: 41 57 push %r15
290b: 41 56 push %r14
290d: 41 55 push %r13
290f: 41 54 push %r12
if (vxlan->cfg.addrmax &&
vxlan->addrcnt >= vxlan->cfg.addrmax)
return -ENOSPC;
2911: 49 89 d4 mov %rdx,%r12
2914: 53 push %rbx
2915: 48 89 f3 mov %rsi,%rbx
2918: 4c 8d ae 40 08 00 00 lea 0x840(%rsi),%r13
return -EOPNOTSUPP;
netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
f = kmalloc(sizeof(*f), GFP_ATOMIC);
if (!f)
return -ENOMEM;
291f: 48 83 ec 20 sub $0x20,%rsp
2923: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # 2929 <vxlan_dev_configure+0x29>
rd = first_remote_rtnl(f);
vxlan_fdb_notify(vxlan, f, rd, RTM_NEWNEIGH);
}
return 0;
}
2929: 48 8b 97 88 14 00 00 mov 0x1488(%rdi),%rdx
return ret;
}
static int vxlan_dev_configure(struct net *src_net, struct net_device *dev,
struct vxlan_config *conf)
{
2930: 0f b7 8e 7c 09 00 00 movzwl 0x97c(%rsi),%ecx
2937: 83 e8 01 sub $0x1,%eax
293a: 48 98 cltq
293c: 4c 8b 74 c2 18 mov 0x18(%rdx,%rax,8),%r14
2941: 41 8b 44 24 50 mov 0x50(%r12),%eax
2946: f6 c4 40 test $0x40,%ah
2949: 0f 84 82 02 00 00 je 2bd1 <vxlan_dev_configure+0x2d1>
294f: 25 1f be ff ff and $0xffffbe1f,%eax
2954: 3d 00 20 00 00 cmp $0x2000,%eax
2959: 0f 85 0a 05 00 00 jne 2e69 <vxlan_dev_configure+0x569>
295f: 41 b9 fe ff ff ff mov $0xfffffffe,%r9d
struct vxlan_dev *vxlan = netdev_priv(dev), *tmp;
struct vxlan_rdst *dst = &vxlan->default_dst;
unsigned short needed_headroom = ETH_HLEN;
int err;
bool use_ipv6 = false;
__be16 default_port = vxlan->cfg.dst_port;
2965: 45 31 d2 xor %r10d,%r10d
2968: 48 c7 86 30 02 00 00 movq $0x0,0x230(%rsi)
296f: 00 00 00 00
struct net_device *lowerdev = NULL;
if (conf->flags & VXLAN_F_GPE) {
2973: 66 44 89 8e 4c 02 00 mov %r9w,0x24c(%rsi)
297a: 00
297b: 66 44 89 96 4e 02 00 mov %r10w,0x24e(%rsi)
2982: 00
/* For now, allow GPE only together with COLLECT_METADATA.
* This can be relaxed later; in such case, the other side
* of the PtP link will have to be provided.
*/
if ((conf->flags & ~VXLAN_F_ALLOWED_GPE) ||
2983: c6 86 75 02 00 00 00 movb $0x0,0x275(%rsi)
298a: c7 86 38 02 00 00 90 movl $0x1090,0x238(%rsi)
2991: 10 00 00
}
static void vxlan_raw_setup(struct net_device *dev)
{
dev->header_ops = NULL;
dev->type = ARPHRD_NONE;
2994: 48 c7 86 10 02 00 00 movq $0x0,0x210(%rsi)
299b: 00 00 00 00
dev->netdev_ops = &vxlan_netdev_ether_ops;
}
static void vxlan_raw_setup(struct net_device *dev)
{
dev->header_ops = NULL;
299f: 48 89 bb 78 08 00 00 mov %rdi,0x878(%rbx)
dev->type = ARPHRD_NONE;
29a6: 41 8b 44 24 38 mov 0x38(%r12),%eax
dev->hard_header_len = 0;
29ab: 4c 8d bb 80 08 00 00 lea 0x880(%rbx),%r15
29b2: 89 83 a0 08 00 00 mov %eax,0x8a0(%rbx)
dev->addr_len = 0;
29b8: 49 8b 04 24 mov (%r12),%rax
dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
29bc: 48 89 83 80 08 00 00 mov %rax,0x880(%rbx)
29c3: 49 8b 44 24 08 mov 0x8(%r12),%rax
dev->netdev_ops = &vxlan_netdev_raw_ops;
29c8: 49 89 47 08 mov %rax,0x8(%r15)
29cc: 49 8b 44 24 10 mov 0x10(%r12),%rax
vxlan_raw_setup(dev);
} else {
vxlan_ether_setup(dev);
}
vxlan->net = src_net;
29d1: 49 89 47 10 mov %rax,0x10(%r15)
29d5: 41 8b 44 24 18 mov 0x18(%r12),%eax
dst->remote_vni = conf->vni;
29da: 41 89 47 18 mov %eax,0x18(%r15)
memcpy(&dst->remote_ip, &conf->remote_ip, sizeof(conf->remote_ip));
29de: 0f b7 83 80 08 00 00 movzwl 0x880(%rbx),%eax
vxlan_ether_setup(dev);
}
vxlan->net = src_net;
dst->remote_vni = conf->vni;
29e5: 66 85 c0 test %ax,%ax
memcpy(&dst->remote_ip, &conf->remote_ip, sizeof(conf->remote_ip));
29e8: 0f 85 b1 02 00 00 jne 2c9f <vxlan_dev_configure+0x39f>
29ee: 41 b8 02 00 00 00 mov $0x2,%r8d
29f4: 66 44 89 83 80 08 00 mov %r8w,0x880(%rbx)
29fb: 00
29fc: 66 83 bb 54 09 00 00 cmpw $0xa,0x954(%rbx)
2a03: 0a
2a04: 0f 84 e2 02 00 00 je 2cec <vxlan_dev_configure+0x3ec>
2a0a: 41 8b 44 24 4c mov 0x4c(%r12),%eax
/* Unless IPv6 is explicitly requested, assume IPv4 */
if (!dst->remote_ip.sa.sa_family)
2a0f: 85 c0 test %eax,%eax
2a11: 0f 85 26 04 00 00 jne 2e3d <vxlan_dev_configure+0x53d>
2a17: 41 8b 74 24 3c mov 0x3c(%r12),%esi
2a1c: 85 f6 test %esi,%esi
dst->remote_ip.sa.sa_family = AF_INET;
2a1e: 0f 85 e6 02 00 00 jne 2d0a <vxlan_dev_configure+0x40a>
2a24: 31 ff xor %edi,%edi
2a26: 8b 83 84 08 00 00 mov 0x884(%rbx),%eax
if (dst->remote_ip.sa.sa_family == AF_INET6 ||
2a2c: 25 f0 00 00 00 and $0xf0,%eax
2a31: 3d e0 00 00 00 cmp $0xe0,%eax
2a36: 0f 94 c0 sete %al
2a39: 84 c0 test %al,%al
return -EPFNOSUPPORT;
use_ipv6 = true;
vxlan->flags |= VXLAN_F_IPV6;
}
if (conf->label && !use_ipv6) {
2a3b: 0f 85 12 04 00 00 jne 2e53 <vxlan_dev_configure+0x553>
2a41: 41 8b 74 24 40 mov 0x40(%r12),%esi
2a46: 41 b8 0e 00 00 00 mov $0xe,%r8d
pr_info("label only supported in use with IPv6\n");
return -EINVAL;
}
if (conf->remote_ifindex) {
2a4c: 85 f6 test %esi,%esi
2a4e: 0f 85 c9 03 00 00 jne 2e1d <vxlan_dev_configure+0x51d>
struct vxlan_net *vn = net_generic(src_net, vxlan_net_id);
struct vxlan_dev *vxlan = netdev_priv(dev), *tmp;
struct vxlan_rdst *dst = &vxlan->default_dst;
unsigned short needed_headroom = ETH_HLEN;
int err;
bool use_ipv6 = false;
2a54: 40 84 ff test %dil,%dil
static inline bool vxlan_addr_multicast(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
return ipv6_addr_is_multicast(&ipa->sin6.sin6_addr);
else
return IN_MULTICAST(ntohl(ipa->sin.sin_addr.s_addr));
2a57: 75 0c jne 2a65 <vxlan_dev_configure+0x165>
2a59: 41 f6 44 24 51 20 testb $0x20,0x51(%r12)
2a5f: 41 8d 40 32 lea 0x32(%r8),%eax
2a63: 74 04 je 2a69 <vxlan_dev_configure+0x169>
2a65: 41 8d 40 46 lea 0x46(%r8),%eax
if (!conf->mtu)
dev->mtu = lowerdev->mtu - (use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM);
needed_headroom = lowerdev->hard_header_len;
} else if (vxlan_addr_multicast(&dst->remote_ip)) {
2a69: 66 89 83 50 02 00 00 mov %ax,0x250(%rbx)
2a70: 49 8b 14 24 mov (%r12),%rdx
pr_info("multicast destination requires interface to be specified\n");
return -EINVAL;
}
if (conf->mtu) {
2a74: 48 89 93 38 09 00 00 mov %rdx,0x938(%rbx)
struct vxlan_config *conf)
{
struct vxlan_net *vn = net_generic(src_net, vxlan_net_id);
struct vxlan_dev *vxlan = netdev_priv(dev), *tmp;
struct vxlan_rdst *dst = &vxlan->default_dst;
unsigned short needed_headroom = ETH_HLEN;
2a7b: 49 8b 54 24 08 mov 0x8(%r12),%rdx
} else if (vxlan_addr_multicast(&dst->remote_ip)) {
pr_info("multicast destination requires interface to be specified\n");
return -EINVAL;
}
if (conf->mtu) {
2a80: 48 89 93 40 09 00 00 mov %rdx,0x940(%rbx)
err = __vxlan_change_mtu(dev, lowerdev, dst, conf->mtu, false);
if (err)
return err;
}
if (use_ipv6 || conf->flags & VXLAN_F_COLLECT_METADATA)
2a87: 49 8b 54 24 10 mov 0x10(%r12),%rdx
2a8c: 48 89 93 48 09 00 00 mov %rdx,0x948(%rbx)
2a93: 49 8b 54 24 18 mov 0x18(%r12),%rdx
needed_headroom += VXLAN6_HEADROOM;
2a98: 48 89 93 50 09 00 00 mov %rdx,0x950(%rbx)
else
needed_headroom += VXLAN_HEADROOM;
dev->needed_headroom = needed_headroom;
2a9f: 49 8b 54 24 20 mov 0x20(%r12),%rdx
memcpy(&vxlan->cfg, conf, sizeof(*conf));
2aa4: 48 89 93 58 09 00 00 mov %rdx,0x958(%rbx)
2aab: 49 8b 54 24 28 mov 0x28(%r12),%rdx
2ab0: 48 89 93 60 09 00 00 mov %rdx,0x960(%rbx)
2ab7: 49 8b 54 24 30 mov 0x30(%r12),%rdx
2abc: 48 89 93 68 09 00 00 mov %rdx,0x968(%rbx)
2ac3: 49 8b 54 24 38 mov 0x38(%r12),%rdx
2ac8: 48 89 93 70 09 00 00 mov %rdx,0x970(%rbx)
2acf: 49 8b 54 24 40 mov 0x40(%r12),%rdx
2ad4: 48 89 93 78 09 00 00 mov %rdx,0x978(%rbx)
2adb: 49 8b 54 24 48 mov 0x48(%r12),%rdx
2ae0: 66 83 bb 7c 09 00 00 cmpw $0x0,0x97c(%rbx)
2ae7: 00
2ae8: 48 89 93 80 09 00 00 mov %rdx,0x980(%rbx)
2aef: 49 8b 54 24 50 mov 0x50(%r12),%rdx
2af4: 48 89 93 88 09 00 00 mov %rdx,0x988(%rbx)
2afb: 49 8b 54 24 58 mov 0x58(%r12),%rdx
2b00: 48 89 93 90 09 00 00 mov %rdx,0x990(%rbx)
2b07: 49 8b 54 24 60 mov 0x60(%r12),%rdx
2b0c: 48 89 93 98 09 00 00 mov %rdx,0x998(%rbx)
if (!vxlan->cfg.dst_port) {
2b13: 75 15 jne 2b2a <vxlan_dev_configure+0x22a>
2b15: 41 f6 44 24 51 40 testb $0x40,0x51(%r12)
needed_headroom += VXLAN6_HEADROOM;
else
needed_headroom += VXLAN_HEADROOM;
dev->needed_headroom = needed_headroom;
memcpy(&vxlan->cfg, conf, sizeof(*conf));
2b1b: b8 b6 12 00 00 mov $0x12b6,%eax
2b20: 0f 45 c8 cmovne %eax,%ecx
2b23: 66 89 8b 7c 09 00 00 mov %cx,0x97c(%rbx)
2b2a: 44 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%r8d
2b31: 45 0b 44 24 50 or 0x50(%r12),%r8d
2b36: 48 83 bb 90 09 00 00 cmpq $0x0,0x990(%rbx)
2b3d: 00
2b3e: 44 89 83 d8 08 00 00 mov %r8d,0x8d8(%rbx)
if (!vxlan->cfg.dst_port) {
if (conf->flags & VXLAN_F_GPE)
vxlan->cfg.dst_port = 4790; /* IANA assigned VXLAN-GPE port */
2b45: 75 0b jne 2b52 <vxlan_dev_configure+0x252>
2b47: 48 c7 83 90 09 00 00 movq $0x12c,0x990(%rbx)
2b4e: 2c 01 00 00
2b52: 49 8b 16 mov (%r14),%rdx
2b55: 49 39 d6 cmp %rdx,%r14
2b58: 48 8d 42 f0 lea -0x10(%rdx),%rax
else
vxlan->cfg.dst_port = default_port;
}
vxlan->flags |= conf->flags;
2b5c: 0f 84 91 00 00 00 je 2bf3 <vxlan_dev_configure+0x2f3>
2b62: 41 8b 74 24 38 mov 0x38(%r12),%esi
if (!vxlan->cfg.age_interval)
2b67: eb 0d jmp 2b76 <vxlan_dev_configure+0x276>
2b69: 48 8b 48 10 mov 0x10(%rax),%rcx
2b6d: 49 39 ce cmp %rcx,%r14
if (conf->flags & VXLAN_F_GPE)
vxlan->cfg.dst_port = 4790; /* IANA assigned VXLAN-GPE port */
else
vxlan->cfg.dst_port = default_port;
}
vxlan->flags |= conf->flags;
2b70: 48 8d 41 f0 lea -0x10(%rcx),%rax
2b74: 74 7d je 2bf3 <vxlan_dev_configure+0x2f3>
if (!vxlan->cfg.age_interval)
2b76: 39 b0 30 01 00 00 cmp %esi,0x130(%rax)
vxlan->cfg.age_interval = FDB_AGE_DEFAULT;
2b7c: 75 eb jne 2b69 <vxlan_dev_configure+0x269>
2b7e: 66 83 78 40 0a cmpw $0xa,0x40(%rax)
list_for_each_entry(tmp, &vn->vxlan_list, next) {
2b83: ba 01 00 00 00 mov $0x1,%edx
2b88: 74 0d je 2b97 <vxlan_dev_configure+0x297>
2b8a: 31 d2 xor %edx,%edx
2b8c: 66 83 b8 14 01 00 00 cmpw $0xa,0x114(%rax)
2b93: 0a
2b94: 0f 94 c2 sete %dl
2b97: 39 fa cmp %edi,%edx
2b99: 75 ce jne 2b69 <vxlan_dev_configure+0x269>
2b9b: 0f b7 93 7c 09 00 00 movzwl 0x97c(%rbx),%edx
2ba2: 66 39 90 3c 01 00 00 cmp %dx,0x13c(%rax)
if (tmp->cfg.vni == conf->vni &&
2ba9: 75 be jne 2b69 <vxlan_dev_configure+0x269>
2bab: 8b 90 98 00 00 00 mov 0x98(%rax),%edx
(tmp->default_dst.remote_ip.sa.sa_family == AF_INET6 ||
2bb1: 44 31 c2 xor %r8d,%edx
2bb4: 80 e6 7d and $0x7d,%dh
2bb7: 75 b0 jne 2b69 <vxlan_dev_configure+0x269>
2bb9: 0f ce bswap %esi
2bbb: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
2bc2: e8 00 00 00 00 callq 2bc7 <vxlan_dev_configure+0x2c7>
if (!vxlan->cfg.age_interval)
vxlan->cfg.age_interval = FDB_AGE_DEFAULT;
list_for_each_entry(tmp, &vn->vxlan_list, next) {
if (tmp->cfg.vni == conf->vni &&
2bc7: b8 ef ff ff ff mov $0xffffffef,%eax
(tmp->default_dst.remote_ip.sa.sa_family == AF_INET6 ||
tmp->cfg.saddr.sa.sa_family == AF_INET6) == use_ipv6 &&
2bcc: e9 bf 00 00 00 jmpq 2c90 <vxlan_dev_configure+0x390>
2bd1: 8b 86 3c 02 00 00 mov 0x23c(%rsi),%eax
2bd7: 48 c7 86 10 02 00 00 movq $0x0,0x210(%rsi)
2bde: 00 00 00 00
tmp->cfg.dst_port == vxlan->cfg.dst_port &&
2be2: 80 e4 f7 and $0xf7,%ah
2be5: 80 cc 80 or $0x80,%ah
2be8: 89 86 3c 02 00 00 mov %eax,0x23c(%rsi)
(tmp->flags & VXLAN_F_RCV_FLAGS) ==
(vxlan->flags & VXLAN_F_RCV_FLAGS)) {
pr_info("duplicate VNI %u\n", be32_to_cpu(conf->vni));
2bee: e9 ac fd ff ff jmpq 299f <vxlan_dev_configure+0x9f>
2bf3: 66 83 bb 80 08 00 00 cmpw $0xa,0x880(%rbx)
2bfa: 0a
return -EEXIST;
2bfb: 48 c7 83 18 02 00 00 movq $0x0,0x218(%rbx)
2c02: 00 00 00 00
}
static void vxlan_ether_setup(struct net_device *dev)
{
dev->priv_flags &= ~IFF_TX_SKB_SHARING;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
2c06: 0f 84 ca 00 00 00 je 2cd6 <vxlan_dev_configure+0x3d6>
dev->netdev_ops = &vxlan_netdev_ether_ops;
2c0c: 8b 93 84 08 00 00 mov 0x884(%rbx),%edx
}
static void vxlan_ether_setup(struct net_device *dev)
{
dev->priv_flags &= ~IFF_TX_SKB_SHARING;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
2c12: 85 d2 test %edx,%edx
2c14: 0f 94 c0 sete %al
2c17: 84 c0 test %al,%al
2c19: 75 44 jne 2c5f <vxlan_dev_configure+0x35f>
2c1b: 8b 83 a4 08 00 00 mov 0x8a4(%rbx),%eax
2c21: 44 0f b7 8b 7c 09 00 movzwl 0x97c(%rbx),%r9d
2c28: 00
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
2c29: 41 b8 00 06 00 00 mov $0x600,%r8d
pr_info("duplicate VNI %u\n", be32_to_cpu(conf->vni));
return -EEXIST;
}
}
dev->ethtool_ops = &vxlan_ethtool_ops;
2c2f: c7 44 24 10 02 00 00 movl $0x2,0x10(%rsp)
2c36: 00
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
2c37: b9 82 00 00 00 mov $0x82,%ecx
return ipv6_addr_any(&ipa->sin6.sin6_addr);
else
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
2c3c: 4c 89 fa mov %r15,%rdx
2c3f: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
2c46: 4c 89 ef mov %r13,%rdi
}
dev->ethtool_ops = &vxlan_ethtool_ops;
/* create an fdb entry for a valid default destination */
if (!vxlan_addr_any(&vxlan->default_dst.remote_ip)) {
2c49: 89 44 24 08 mov %eax,0x8(%rsp)
err = vxlan_fdb_create(vxlan, all_zeros_mac,
2c4d: 8b 83 a0 08 00 00 mov 0x8a0(%rbx),%eax
2c53: 89 04 24 mov %eax,(%rsp)
2c56: e8 35 f9 ff ff callq 2590 <vxlan_fdb_create>
2c5b: 85 c0 test %eax,%eax
2c5d: 75 31 jne 2c90 <vxlan_dev_configure+0x390>
2c5f: 48 89 df mov %rbx,%rdi
2c62: e8 00 00 00 00 callq 2c67 <vxlan_dev_configure+0x367>
2c67: 85 c0 test %eax,%eax
2c69: 0f 85 91 01 00 00 jne 2e00 <vxlan_dev_configure+0x500>
2c6f: 49 8b 16 mov (%r14),%rdx
2c72: 48 8d 83 50 08 00 00 lea 0x850(%rbx),%rax
2c79: 48 89 42 08 mov %rax,0x8(%rdx)
2c7d: 48 89 93 50 08 00 00 mov %rdx,0x850(%rbx)
2c84: 4c 89 b3 58 08 00 00 mov %r14,0x858(%rbx)
NLM_F_EXCL|NLM_F_CREATE,
vxlan->cfg.dst_port,
vxlan->default_dst.remote_vni,
vxlan->default_dst.remote_ifindex,
NTF_SELF);
if (err)
2c8b: 49 89 06 mov %rax,(%r14)
2c8e: 31 c0 xor %eax,%eax
return err;
}
err = register_netdevice(dev);
2c90: 48 83 c4 20 add $0x20,%rsp
2c94: 5b pop %rbx
2c95: 41 5c pop %r12
if (err) {
2c97: 41 5d pop %r13
2c99: 41 5e pop %r14
2c9b: 41 5f pop %r15
2c9d: 5d pop %rbp
2c9e: c3 retq
* Insert a new entry after the specified head.
* This is good for implementing stacks.
*/
static inline void list_add(struct list_head *new, struct list_head *head)
{
__list_add(new, head, head->next);
2c9f: 66 83 f8 0a cmp $0xa,%ax
vxlan_fdb_delete_default(vxlan);
return err;
}
list_add(&vxlan->next, &vn->vxlan_list);
2ca3: 0f 85 53 fd ff ff jne 29fc <vxlan_dev_configure+0xfc>
#ifndef CONFIG_DEBUG_LIST
static inline void __list_add(struct list_head *new,
struct list_head *prev,
struct list_head *next)
{
next->prev = new;
2ca9: 83 8b d8 08 00 00 20 orl $0x20,0x8d8(%rbx)
new->next = next;
2cb0: 41 8b 74 24 3c mov 0x3c(%r12),%esi
new->prev = prev;
2cb5: 85 f6 test %esi,%esi
2cb7: 0f 85 dc 00 00 00 jne 2d99 <vxlan_dev_configure+0x499>
{
switch (size) {
case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
2cbd: 0f b6 83 88 08 00 00 movzbl 0x888(%rbx),%eax
return 0;
}
2cc4: bf 01 00 00 00 mov $0x1,%edi
2cc9: 3d ff 00 00 00 cmp $0xff,%eax
2cce: 0f 94 c0 sete %al
/* Unless IPv6 is explicitly requested, assume IPv4 */
if (!dst->remote_ip.sa.sa_family)
dst->remote_ip.sa.sa_family = AF_INET;
if (dst->remote_ip.sa.sa_family == AF_INET6 ||
2cd1: e9 63 fd ff ff jmpq 2a39 <vxlan_dev_configure+0x139>
2cd6: 48 8b 83 88 08 00 00 mov 0x888(%rbx),%rax
vxlan->cfg.saddr.sa.sa_family == AF_INET6) {
if (!IS_ENABLED(CONFIG_IPV6))
return -EPFNOSUPPORT;
use_ipv6 = true;
vxlan->flags |= VXLAN_F_IPV6;
2cdd: 48 0b 83 90 08 00 00 or 0x890(%rbx),%rax
if (conf->label && !use_ipv6) {
pr_info("label only supported in use with IPv6\n");
return -EINVAL;
}
if (conf->remote_ifindex) {
2ce4: 0f 94 c0 sete %al
2ce7: e9 2b ff ff ff jmpq 2c17 <vxlan_dev_configure+0x317>
2cec: 83 8b d8 08 00 00 20 orl $0x20,0x8d8(%rbx)
return (a->s6_addr32[0] & htonl(0xfffffff0)) == htonl(0x20010010);
}
static inline bool ipv6_addr_is_multicast(const struct in6_addr *addr)
{
return (addr->s6_addr32[0] & htonl(0xFF000000)) == htonl(0xFF000000);
2cf3: 41 8b 74 24 3c mov 0x3c(%r12),%esi
if (dst->remote_ip.sa.sa_family == AF_INET6 ||
vxlan->cfg.saddr.sa.sa_family == AF_INET6) {
if (!IS_ENABLED(CONFIG_IPV6))
return -EPFNOSUPPORT;
use_ipv6 = true;
2cf8: 85 f6 test %esi,%esi
2cfa: 0f 85 99 00 00 00 jne 2d99 <vxlan_dev_configure+0x499>
2d00: bf 01 00 00 00 mov $0x1,%edi
2d05: e9 1c fd ff ff jmpq 2a26 <vxlan_dev_configure+0x126>
static inline bool ipv6_addr_any(const struct in6_addr *a)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
const unsigned long *ul = (const unsigned long *)a;
return (ul[0] | ul[1]) == 0UL;
2d0a: 89 4d d4 mov %ecx,-0x2c(%rbp)
2d0d: e8 00 00 00 00 callq 2d12 <vxlan_dev_configure+0x412>
2d12: 41 8b 74 24 3c mov 0x3c(%r12),%esi
2d17: 48 85 c0 test %rax,%rax
2d1a: 8b 4d d4 mov -0x2c(%rbp),%ecx
vxlan->flags |= VXLAN_F_IPV6;
2d1d: 89 b3 a4 08 00 00 mov %esi,0x8a4(%rbx)
if (conf->label && !use_ipv6) {
pr_info("label only supported in use with IPv6\n");
return -EINVAL;
}
if (conf->remote_ifindex) {
2d23: 0f 84 fe 00 00 00 je 2e27 <vxlan_dev_configure+0x527>
2d29: 41 8b 54 24 40 mov 0x40(%r12),%edx
2d2e: 31 ff xor %edi,%edi
if (dst->remote_ip.sa.sa_family == AF_INET6 ||
vxlan->cfg.saddr.sa.sa_family == AF_INET6) {
if (!IS_ENABLED(CONFIG_IPV6))
return -EPFNOSUPPORT;
use_ipv6 = true;
2d30: 85 d2 test %edx,%edx
2d32: 0f 85 b9 00 00 00 jne 2df1 <vxlan_dev_configure+0x4f1>
2d38: 8b 90 48 02 00 00 mov 0x248(%rax),%edx
pr_info("label only supported in use with IPv6\n");
return -EINVAL;
}
if (conf->remote_ifindex) {
lowerdev = __dev_get_by_index(src_net, conf->remote_ifindex);
2d3e: 31 ff xor %edi,%edi
2d40: be 32 00 00 00 mov $0x32,%esi
dst->remote_ifindex = conf->remote_ifindex;
2d45: 29 f2 sub %esi,%edx
if (!lowerdev) {
2d47: 89 93 48 02 00 00 mov %edx,0x248(%rbx)
return -EINVAL;
}
if (conf->remote_ifindex) {
lowerdev = __dev_get_by_index(src_net, conf->remote_ifindex);
dst->remote_ifindex = conf->remote_ifindex;
2d4d: 41 8b 54 24 40 mov 0x40(%r12),%edx
2d52: 44 0f b7 80 4e 02 00 movzwl 0x24e(%rax),%r8d
2d59: 00
return -EPERM;
}
}
#endif
if (!conf->mtu)
2d5a: 85 d2 test %edx,%edx
2d5c: 0f 84 f2 fc ff ff je 2a54 <vxlan_dev_configure+0x154>
2d62: 89 d6 mov %edx,%esi
2d64: 8b 80 48 02 00 00 mov 0x248(%rax),%eax
dev->mtu = lowerdev->mtu - (use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM);
2d6a: 8d 50 ba lea -0x46(%rax),%edx
2d6d: 83 e8 32 sub $0x32,%eax
2d70: 66 83 bb 80 08 00 00 cmpw $0xa,0x880(%rbx)
2d77: 0a
2d78: 0f 45 d0 cmovne %eax,%edx
2d7b: 83 fe 43 cmp $0x43,%esi
2d7e: b8 ea ff ff ff mov $0xffffffea,%eax
needed_headroom = lowerdev->hard_header_len;
2d83: 0f 8e 07 ff ff ff jle 2c90 <vxlan_dev_configure+0x390>
2d89: 39 f2 cmp %esi,%edx
} else if (vxlan_addr_multicast(&dst->remote_ip)) {
pr_info("multicast destination requires interface to be specified\n");
return -EINVAL;
}
if (conf->mtu) {
2d8b: 0f 4f d6 cmovg %esi,%edx
2d8e: 89 93 48 02 00 00 mov %edx,0x248(%rbx)
struct vxlan_rdst *dst, int new_mtu, bool strict)
{
int max_mtu = IP_MAX_MTU;
if (lowerdev)
max_mtu = lowerdev->mtu;
2d94: e9 bb fc ff ff jmpq 2a54 <vxlan_dev_configure+0x154>
2d99: 89 4d d4 mov %ecx,-0x2c(%rbp)
if (dst->remote_ip.sa.sa_family == AF_INET6)
max_mtu -= VXLAN6_HEADROOM;
2d9c: e8 00 00 00 00 callq 2da1 <vxlan_dev_configure+0x4a1>
2da1: 41 8b 74 24 3c mov 0x3c(%r12),%esi
2da6: 48 85 c0 test %rax,%rax
2da9: 8b 4d d4 mov -0x2c(%rbp),%ecx
else
max_mtu -= VXLAN_HEADROOM;
if (new_mtu < 68)
2dac: 89 b3 a4 08 00 00 mov %esi,0x8a4(%rbx)
return -EINVAL;
2db2: 74 73 je 2e27 <vxlan_dev_configure+0x527>
if (dst->remote_ip.sa.sa_family == AF_INET6)
max_mtu -= VXLAN6_HEADROOM;
else
max_mtu -= VXLAN_HEADROOM;
if (new_mtu < 68)
2db4: 48 8b 90 08 03 00 00 mov 0x308(%rax),%rdx
return -EINVAL;
new_mtu = max_mtu;
}
dev->mtu = new_mtu;
2dbb: 48 85 d2 test %rdx,%rdx
2dbe: 74 0e je 2dce <vxlan_dev_configure+0x4ce>
2dc0: 8b b2 38 02 00 00 mov 0x238(%rdx),%esi
2dc6: 85 f6 test %esi,%esi
2dc8: 0f 85 b1 00 00 00 jne 2e7f <vxlan_dev_configure+0x57f>
pr_info("label only supported in use with IPv6\n");
return -EINVAL;
}
if (conf->remote_ifindex) {
lowerdev = __dev_get_by_index(src_net, conf->remote_ifindex);
2dce: 41 8b 54 24 40 mov 0x40(%r12),%edx
dst->remote_ifindex = conf->remote_ifindex;
2dd3: bf 01 00 00 00 mov $0x1,%edi
if (!lowerdev) {
2dd8: 85 d2 test %edx,%edx
2dda: 75 15 jne 2df1 <vxlan_dev_configure+0x4f1>
return -EINVAL;
}
if (conf->remote_ifindex) {
lowerdev = __dev_get_by_index(src_net, conf->remote_ifindex);
dst->remote_ifindex = conf->remote_ifindex;
2ddc: 8b 90 48 02 00 00 mov 0x248(%rax),%edx
if (!lowerdev) {
2de2: bf 01 00 00 00 mov $0x1,%edi
})
static __always_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
__READ_ONCE_SIZE;
2de7: be 46 00 00 00 mov $0x46,%esi
}
#if IS_ENABLED(CONFIG_IPV6)
if (use_ipv6) {
struct inet6_dev *idev = __in6_dev_get(lowerdev);
if (idev && idev->cnf.disable_ipv6) {
2dec: e9 54 ff ff ff jmpq 2d45 <vxlan_dev_configure+0x445>
2df1: 44 0f b7 80 4e 02 00 movzwl 0x24e(%rax),%r8d
2df8: 00
2df9: 89 d6 mov %edx,%esi
2dfb: e9 64 ff ff ff jmpq 2d64 <vxlan_dev_configure+0x464>
return -EPERM;
}
}
#endif
if (!conf->mtu)
2e00: 4c 89 ef mov %r13,%rdi
2e03: 89 45 d4 mov %eax,-0x2c(%rbp)
2e06: e8 c5 f1 ff ff callq 1fd0 <vxlan_fdb_delete_default>
2e0b: 8b 45 d4 mov -0x2c(%rbp),%eax
dev->mtu = lowerdev->mtu - (use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM);
2e0e: 48 83 c4 20 add $0x20,%rsp
2e12: 5b pop %rbx
2e13: 41 5c pop %r12
2e15: 41 5d pop %r13
2e17: 41 5e pop %r14
2e19: 41 5f pop %r15
2e1b: 5d pop %rbp
2e1c: c3 retq
2e1d: b8 ff ff 00 00 mov $0xffff,%eax
needed_headroom = lowerdev->hard_header_len;
2e22: e9 43 ff ff ff jmpq 2d6a <vxlan_dev_configure+0x46a>
2e27: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
2e2e: e8 00 00 00 00 callq 2e33 <vxlan_dev_configure+0x533>
return err;
}
err = register_netdevice(dev);
if (err) {
vxlan_fdb_delete_default(vxlan);
2e33: b8 ed ff ff ff mov $0xffffffed,%eax
2e38: e9 53 fe ff ff jmpq 2c90 <vxlan_dev_configure+0x390>
return err;
2e3d: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
}
list_add(&vxlan->next, &vn->vxlan_list);
return 0;
}
2e44: e8 00 00 00 00 callq 2e49 <vxlan_dev_configure+0x549>
2e49: b8 ea ff ff ff mov $0xffffffea,%eax
static int __vxlan_change_mtu(struct net_device *dev,
struct net_device *lowerdev,
struct vxlan_rdst *dst, int new_mtu, bool strict)
{
int max_mtu = IP_MAX_MTU;
2e4e: e9 3d fe ff ff jmpq 2c90 <vxlan_dev_configure+0x390>
2e53: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
if (conf->remote_ifindex) {
lowerdev = __dev_get_by_index(src_net, conf->remote_ifindex);
dst->remote_ifindex = conf->remote_ifindex;
if (!lowerdev) {
pr_info("ifindex %d does not exist\n", dst->remote_ifindex);
2e5a: e8 00 00 00 00 callq 2e5f <vxlan_dev_configure+0x55f>
2e5f: b8 ea ff ff ff mov $0xffffffea,%eax
return -ENODEV;
2e64: e9 27 fe ff ff jmpq 2c90 <vxlan_dev_configure+0x390>
2e69: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
use_ipv6 = true;
vxlan->flags |= VXLAN_F_IPV6;
}
if (conf->label && !use_ipv6) {
pr_info("label only supported in use with IPv6\n");
2e70: e8 00 00 00 00 callq 2e75 <vxlan_dev_configure+0x575>
2e75: b8 ea ff ff ff mov $0xffffffea,%eax
return -EINVAL;
2e7a: e9 11 fe ff ff jmpq 2c90 <vxlan_dev_configure+0x390>
2e7f: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
if (!conf->mtu)
dev->mtu = lowerdev->mtu - (use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM);
needed_headroom = lowerdev->hard_header_len;
} else if (vxlan_addr_multicast(&dst->remote_ip)) {
pr_info("multicast destination requires interface to be specified\n");
2e86: e8 00 00 00 00 callq 2e8b <vxlan_dev_configure+0x58b>
2e8b: b8 ff ff ff ff mov $0xffffffff,%eax
return -EINVAL;
2e90: e9 fb fd ff ff jmpq 2c90 <vxlan_dev_configure+0x390>
2e95: 90 nop
2e96: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
2e9d: 00 00 00
0000000000002ea0 <vxlan_newlink>:
* This can be relaxed later; in such case, the other side
* of the PtP link will have to be provided.
*/
if ((conf->flags & ~VXLAN_F_ALLOWED_GPE) ||
!(conf->flags & VXLAN_F_COLLECT_METADATA)) {
pr_info("unsupported combination of extensions\n");
2ea0: e8 00 00 00 00 callq 2ea5 <vxlan_newlink+0x5>
return -EINVAL;
2ea5: 55 push %rbp
2ea6: 48 89 e5 mov %rsp,%rbp
2ea9: 41 57 push %r15
2eab: 41 56 push %r14
2ead: 41 55 push %r13
#if IS_ENABLED(CONFIG_IPV6)
if (use_ipv6) {
struct inet6_dev *idev = __in6_dev_get(lowerdev);
if (idev && idev->cnf.disable_ipv6) {
pr_info("IPv6 is disabled via sysctl\n");
2eaf: 41 54 push %r12
2eb1: 49 89 fd mov %rdi,%r13
2eb4: 53 push %rbx
2eb5: 48 89 cb mov %rcx,%rbx
2eb8: b9 0d 00 00 00 mov $0xd,%ecx
return -EPERM;
2ebd: 49 89 f6 mov %rsi,%r14
2ec0: 49 89 d7 mov %rdx,%r15
2ec3: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
2ec7: 48 83 c4 80 add $0xffffffffffffff80,%rsp
2ecb: 4c 8d 64 24 10 lea 0x10(%rsp),%r12
return 0;
}
static int vxlan_newlink(struct net *src_net, struct net_device *dev,
struct nlattr *tb[], struct nlattr *data[])
{
2ed0: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
2ed7: 00 00
2ed9: 48 89 44 24 78 mov %rax,0x78(%rsp)
2ede: 31 c0 xor %eax,%eax
2ee0: 4c 89 e7 mov %r12,%rdi
2ee3: f3 48 ab rep stos %rax,%es:(%rdi)
2ee6: 48 8b 43 08 mov 0x8(%rbx),%rax
struct vxlan_config conf;
memset(&conf, 0, sizeof(conf));
2eea: 48 85 c0 test %rax,%rax
return 0;
}
static int vxlan_newlink(struct net *src_net, struct net_device *dev,
struct nlattr *tb[], struct nlattr *data[])
{
2eed: 74 09 je 2ef8 <vxlan_newlink+0x58>
2eef: 8b 40 04 mov 0x4(%rax),%eax
2ef2: 0f c8 bswap %eax
2ef4: 89 44 24 48 mov %eax,0x48(%rsp)
2ef8: 48 8b 43 10 mov 0x10(%rbx),%rax
struct vxlan_config conf;
memset(&conf, 0, sizeof(conf));
2efc: 48 85 c0 test %rax,%rax
2eff: 0f 84 df 02 00 00 je 31e4 <vxlan_newlink+0x344>
return 0;
}
static int vxlan_newlink(struct net *src_net, struct net_device *dev,
struct nlattr *tb[], struct nlattr *data[])
{
2f05: 8b 40 04 mov 0x4(%rax),%eax
2f08: 89 44 24 14 mov %eax,0x14(%rsp)
2f0c: 48 8b 43 20 mov 0x20(%rbx),%rax
struct vxlan_config conf;
memset(&conf, 0, sizeof(conf));
2f10: 48 85 c0 test %rax,%rax
2f13: 0f 84 8c 02 00 00 je 31a5 <vxlan_newlink+0x305>
if (data[IFLA_VXLAN_ID])
2f19: 8b 40 04 mov 0x4(%rax),%eax
2f1c: ba 02 00 00 00 mov $0x2,%edx
conf.vni = cpu_to_be32(nla_get_u32(data[IFLA_VXLAN_ID]));
2f21: 66 89 54 24 2c mov %dx,0x2c(%rsp)
2f26: 89 44 24 30 mov %eax,0x30(%rsp)
if (data[IFLA_VXLAN_GROUP]) {
2f2a: 48 8b 43 18 mov 0x18(%rbx),%rax
2f2e: 48 85 c0 test %rax,%rax
2f31: 74 07 je 2f3a <vxlan_newlink+0x9a>
2f33: 8b 40 04 mov 0x4(%rax),%eax
* nla_get_in_addr - return payload of IPv4 address attribute
* @nla: IPv4 address netlink attribute
*/
static inline __be32 nla_get_in_addr(const struct nlattr *nla)
{
return *(__be32 *) nla_data(nla);
2f36: 89 44 24 4c mov %eax,0x4c(%rsp)
conf.remote_ip.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_GROUP]);
2f3a: 48 8b 43 30 mov 0x30(%rbx),%rax
conf.remote_ip.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_GROUP6]);
conf.remote_ip.sa.sa_family = AF_INET6;
}
if (data[IFLA_VXLAN_LOCAL]) {
2f3e: 48 85 c0 test %rax,%rax
2f41: 74 08 je 2f4b <vxlan_newlink+0xab>
2f43: 0f b6 40 04 movzbl 0x4(%rax),%eax
2f47: 88 44 24 5a mov %al,0x5a(%rsp)
2f4b: 48 8b 43 28 mov 0x28(%rbx),%rax
conf.saddr.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_LOCAL]);
conf.saddr.sa.sa_family = AF_INET;
2f4f: 48 85 c0 test %rax,%rax
2f52: 74 08 je 2f5c <vxlan_newlink+0xbc>
2f54: 0f b6 40 04 movzbl 0x4(%rax),%eax
conf.remote_ip.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_GROUP6]);
conf.remote_ip.sa.sa_family = AF_INET6;
}
if (data[IFLA_VXLAN_LOCAL]) {
conf.saddr.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_LOCAL]);
2f58: 88 44 24 5b mov %al,0x5b(%rsp)
/* TODO: respect scope id */
conf.saddr.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_LOCAL6]);
conf.saddr.sa.sa_family = AF_INET6;
}
if (data[IFLA_VXLAN_LINK])
2f5c: 48 8b 83 d0 00 00 00 mov 0xd0(%rbx),%rax
conf.remote_ifindex = nla_get_u32(data[IFLA_VXLAN_LINK]);
2f63: 48 85 c0 test %rax,%rax
2f66: 74 0c je 2f74 <vxlan_newlink+0xd4>
2f68: 8b 40 04 mov 0x4(%rax),%eax
if (data[IFLA_VXLAN_TOS])
2f6b: 25 00 0f ff ff and $0xffff0f00,%eax
2f70: 89 44 24 5c mov %eax,0x5c(%rsp)
* nla_get_u8 - return payload of u8 attribute
* @nla: u8 netlink attribute
*/
static inline u8 nla_get_u8(const struct nlattr *nla)
{
return *(u8 *) nla_data(nla);
2f74: 48 8b 43 38 mov 0x38(%rbx),%rax
conf.tos = nla_get_u8(data[IFLA_VXLAN_TOS]);
2f78: 48 85 c0 test %rax,%rax
if (data[IFLA_VXLAN_TTL])
2f7b: 0f 84 a7 01 00 00 je 3128 <vxlan_newlink+0x288>
2f81: 80 78 04 00 cmpb $0x0,0x4(%rax)
2f85: 0f 85 9d 01 00 00 jne 3128 <vxlan_newlink+0x288>
conf.ttl = nla_get_u8(data[IFLA_VXLAN_TTL]);
2f8b: 48 8b 43 40 mov 0x40(%rbx),%rax
if (data[IFLA_VXLAN_LABEL])
2f8f: 48 85 c0 test %rax,%rax
2f92: 74 08 je 2f9c <vxlan_newlink+0xfc>
2f94: 8b 40 04 mov 0x4(%rax),%eax
2f97: 48 89 44 24 68 mov %rax,0x68(%rsp)
conf.label = nla_get_be32(data[IFLA_VXLAN_LABEL]) &
2f9c: 48 8b 43 58 mov 0x58(%rbx),%rax
2fa0: 48 85 c0 test %rax,%rax
2fa3: 74 0a je 2faf <vxlan_newlink+0x10f>
IPV6_FLOWLABEL_MASK;
if (!data[IFLA_VXLAN_LEARNING] || nla_get_u8(data[IFLA_VXLAN_LEARNING]))
2fa5: 80 78 04 00 cmpb $0x0,0x4(%rax)
2fa9: 0f 85 ec 01 00 00 jne 319b <vxlan_newlink+0x2fb>
2faf: 48 8b 43 60 mov 0x60(%rbx),%rax
2fb3: 48 85 c0 test %rax,%rax
2fb6: 74 0a je 2fc2 <vxlan_newlink+0x122>
2fb8: 80 78 04 00 cmpb $0x0,0x4(%rax)
conf.flags |= VXLAN_F_LEARN;
if (data[IFLA_VXLAN_AGEING])
2fbc: 0f 85 cf 01 00 00 jne 3191 <vxlan_newlink+0x2f1>
2fc2: 48 8b 43 68 mov 0x68(%rbx),%rax
conf.age_interval = nla_get_u32(data[IFLA_VXLAN_AGEING]);
2fc6: 48 85 c0 test %rax,%rax
2fc9: 74 0a je 2fd5 <vxlan_newlink+0x135>
2fcb: 80 78 04 00 cmpb $0x0,0x4(%rax)
if (data[IFLA_VXLAN_PROXY] && nla_get_u8(data[IFLA_VXLAN_PROXY]))
2fcf: 0f 85 b2 01 00 00 jne 3187 <vxlan_newlink+0x2e7>
2fd5: 48 8b 43 70 mov 0x70(%rbx),%rax
2fd9: 48 85 c0 test %rax,%rax
2fdc: 74 0a je 2fe8 <vxlan_newlink+0x148>
2fde: 80 78 04 00 cmpb $0x0,0x4(%rax)
conf.flags |= VXLAN_F_PROXY;
if (data[IFLA_VXLAN_RSC] && nla_get_u8(data[IFLA_VXLAN_RSC]))
2fe2: 0f 85 95 01 00 00 jne 317d <vxlan_newlink+0x2dd>
2fe8: 48 8b 43 48 mov 0x48(%rbx),%rax
2fec: 48 85 c0 test %rax,%rax
2fef: 74 07 je 2ff8 <vxlan_newlink+0x158>
2ff1: 8b 40 04 mov 0x4(%rax),%eax
conf.flags |= VXLAN_F_RSC;
if (data[IFLA_VXLAN_L2MISS] && nla_get_u8(data[IFLA_VXLAN_L2MISS]))
2ff4: 89 44 24 70 mov %eax,0x70(%rsp)
2ff8: 48 8b 83 c8 00 00 00 mov 0xc8(%rbx),%rax
2fff: 48 85 c0 test %rax,%rax
3002: 74 0a je 300e <vxlan_newlink+0x16e>
3004: 80 78 04 00 cmpb $0x0,0x4(%rax)
conf.flags |= VXLAN_F_L2MISS;
if (data[IFLA_VXLAN_L3MISS] && nla_get_u8(data[IFLA_VXLAN_L3MISS]))
3008: 0f 85 62 01 00 00 jne 3170 <vxlan_newlink+0x2d0>
300e: 48 8b 43 50 mov 0x50(%rbx),%rax
3012: 48 85 c0 test %rax,%rax
3015: 74 1a je 3031 <vxlan_newlink+0x191>
3017: 0f b7 50 04 movzwl 0x4(%rax),%edx
conf.flags |= VXLAN_F_L3MISS;
if (data[IFLA_VXLAN_LIMIT])
301b: 66 c1 c2 08 rol $0x8,%dx
301f: 66 89 54 24 56 mov %dx,0x56(%rsp)
conf.addrmax = nla_get_u32(data[IFLA_VXLAN_LIMIT]);
3024: 0f b7 40 06 movzwl 0x6(%rax),%eax
if (data[IFLA_VXLAN_COLLECT_METADATA] &&
3028: 66 c1 c0 08 rol $0x8,%ax
302c: 66 89 44 24 58 mov %ax,0x58(%rsp)
3031: 48 8b 43 78 mov 0x78(%rbx),%rax
3035: 48 85 c0 test %rax,%rax
3038: 74 09 je 3043 <vxlan_newlink+0x1a3>
303a: 0f b7 40 04 movzwl 0x4(%rax),%eax
nla_get_u8(data[IFLA_VXLAN_COLLECT_METADATA]))
conf.flags |= VXLAN_F_COLLECT_METADATA;
if (data[IFLA_VXLAN_PORT_RANGE]) {
303e: 66 89 44 24 54 mov %ax,0x54(%rsp)
3043: 48 8b 83 90 00 00 00 mov 0x90(%rbx),%rax
const struct ifla_vxlan_port_range *p
= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
conf.port_min = ntohs(p->low);
304a: 48 85 c0 test %rax,%rax
304d: 74 0a je 3059 <vxlan_newlink+0x1b9>
304f: 80 78 04 00 cmpb $0x0,0x4(%rax)
3053: 0f 84 0d 01 00 00 je 3166 <vxlan_newlink+0x2c6>
conf.port_max = ntohs(p->high);
3059: 48 8b 83 98 00 00 00 mov 0x98(%rbx),%rax
3060: 48 85 c0 test %rax,%rax
}
if (data[IFLA_VXLAN_PORT])
3063: 74 0a je 306f <vxlan_newlink+0x1cf>
3065: 80 78 04 00 cmpb $0x0,0x4(%rax)
3069: 0f 85 ea 00 00 00 jne 3159 <vxlan_newlink+0x2b9>
conf.dst_port = nla_get_be16(data[IFLA_VXLAN_PORT]);
306f: 48 8b 83 a0 00 00 00 mov 0xa0(%rbx),%rax
if (data[IFLA_VXLAN_UDP_CSUM] &&
3076: 48 85 c0 test %rax,%rax
3079: 74 0a je 3085 <vxlan_newlink+0x1e5>
307b: 80 78 04 00 cmpb $0x0,0x4(%rax)
307f: 0f 85 c7 00 00 00 jne 314c <vxlan_newlink+0x2ac>
3085: 48 8b 83 a8 00 00 00 mov 0xa8(%rbx),%rax
!nla_get_u8(data[IFLA_VXLAN_UDP_CSUM]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM_TX;
if (data[IFLA_VXLAN_UDP_ZERO_CSUM6_TX] &&
308c: 48 85 c0 test %rax,%rax
308f: 74 0a je 309b <vxlan_newlink+0x1fb>
3091: 80 78 04 00 cmpb $0x0,0x4(%rax)
3095: 0f 85 a4 00 00 00 jne 313f <vxlan_newlink+0x29f>
309b: 48 8b 83 b0 00 00 00 mov 0xb0(%rbx),%rax
nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_TX]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM6_TX;
if (data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX] &&
30a2: 48 85 c0 test %rax,%rax
30a5: 74 0a je 30b1 <vxlan_newlink+0x211>
30a7: 80 78 04 00 cmpb $0x0,0x4(%rax)
30ab: 0f 85 81 00 00 00 jne 3132 <vxlan_newlink+0x292>
30b1: 48 83 bb b8 00 00 00 cmpq $0x0,0xb8(%rbx)
30b8: 00
nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM6_RX;
if (data[IFLA_VXLAN_REMCSUM_TX] &&
30b9: 74 08 je 30c3 <vxlan_newlink+0x223>
30bb: 81 4c 24 60 00 08 00 orl $0x800,0x60(%rsp)
30c2: 00
30c3: 48 83 bb d8 00 00 00 cmpq $0x0,0xd8(%rbx)
30ca: 00
nla_get_u8(data[IFLA_VXLAN_REMCSUM_TX]))
conf.flags |= VXLAN_F_REMCSUM_TX;
if (data[IFLA_VXLAN_REMCSUM_RX] &&
30cb: 74 08 je 30d5 <vxlan_newlink+0x235>
30cd: 81 4c 24 60 00 40 00 orl $0x4000,0x60(%rsp)
30d4: 00
30d5: 48 83 bb c0 00 00 00 cmpq $0x0,0xc0(%rbx)
30dc: 00
30dd: 74 08 je 30e7 <vxlan_newlink+0x247>
30df: 81 4c 24 60 00 10 00 orl $0x1000,0x60(%rsp)
30e6: 00
nla_get_u8(data[IFLA_VXLAN_REMCSUM_RX]))
conf.flags |= VXLAN_F_REMCSUM_RX;
if (data[IFLA_VXLAN_GBP])
30e7: 49 8b 47 20 mov 0x20(%r15),%rax
conf.flags |= VXLAN_F_GBP;
30eb: 48 85 c0 test %rax,%rax
30ee: 74 07 je 30f7 <vxlan_newlink+0x257>
30f0: 8b 40 04 mov 0x4(%rax),%eax
if (data[IFLA_VXLAN_GPE])
30f3: 89 44 24 50 mov %eax,0x50(%rsp)
30f7: 4c 89 e2 mov %r12,%rdx
30fa: 4c 89 f6 mov %r14,%rsi
conf.flags |= VXLAN_F_GPE;
30fd: 4c 89 ef mov %r13,%rdi
3100: e8 fb f7 ff ff callq 2900 <vxlan_dev_configure>
if (data[IFLA_VXLAN_REMCSUM_NOPARTIAL])
3105: 48 8b 4c 24 78 mov 0x78(%rsp),%rcx
310a: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
3111: 00 00
conf.flags |= VXLAN_F_REMCSUM_NOPARTIAL;
3113: 0f 85 0a 01 00 00 jne 3223 <vxlan_newlink+0x383>
if (tb[IFLA_MTU])
3119: 48 8d 65 d8 lea -0x28(%rbp),%rsp
311d: 5b pop %rbx
311e: 41 5c pop %r12
conf.mtu = nla_get_u32(tb[IFLA_MTU]);
3120: 41 5d pop %r13
3122: 41 5e pop %r14
3124: 41 5f pop %r15
3126: 5d pop %rbp
return vxlan_dev_configure(src_net, dev, &conf);
3127: c3 retq
3128: 83 4c 24 60 01 orl $0x1,0x60(%rsp)
312d: e9 59 fe ff ff jmpq 2f8b <vxlan_newlink+0xeb>
3132: 81 4c 24 60 00 04 00 orl $0x400,0x60(%rsp)
3139: 00
}
313a: e9 72 ff ff ff jmpq 30b1 <vxlan_newlink+0x211>
313f: 81 4c 24 60 00 02 00 orl $0x200,0x60(%rsp)
3146: 00
3147: e9 4f ff ff ff jmpq 309b <vxlan_newlink+0x1fb>
314c: 81 4c 24 60 00 01 00 orl $0x100,0x60(%rsp)
3153: 00
3154: e9 2c ff ff ff jmpq 3085 <vxlan_newlink+0x1e5>
if (data[IFLA_VXLAN_LABEL])
conf.label = nla_get_be32(data[IFLA_VXLAN_LABEL]) &
IPV6_FLOWLABEL_MASK;
if (!data[IFLA_VXLAN_LEARNING] || nla_get_u8(data[IFLA_VXLAN_LEARNING]))
conf.flags |= VXLAN_F_LEARN;
3159: 81 4c 24 60 80 00 00 orl $0x80,0x60(%rsp)
3160: 00
3161: e9 09 ff ff ff jmpq 306f <vxlan_newlink+0x1cf>
nla_get_u8(data[IFLA_VXLAN_REMCSUM_TX]))
conf.flags |= VXLAN_F_REMCSUM_TX;
if (data[IFLA_VXLAN_REMCSUM_RX] &&
nla_get_u8(data[IFLA_VXLAN_REMCSUM_RX]))
conf.flags |= VXLAN_F_REMCSUM_RX;
3166: 83 4c 24 60 40 orl $0x40,0x60(%rsp)
316b: e9 e9 fe ff ff jmpq 3059 <vxlan_newlink+0x1b9>
nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM6_RX;
if (data[IFLA_VXLAN_REMCSUM_TX] &&
nla_get_u8(data[IFLA_VXLAN_REMCSUM_TX]))
conf.flags |= VXLAN_F_REMCSUM_TX;
3170: 81 4c 24 60 00 20 00 orl $0x2000,0x60(%rsp)
3177: 00
3178: e9 91 fe ff ff jmpq 300e <vxlan_newlink+0x16e>
nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_TX]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM6_TX;
if (data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX] &&
nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM6_RX;
317d: 83 4c 24 60 10 orl $0x10,0x60(%rsp)
3182: e9 61 fe ff ff jmpq 2fe8 <vxlan_newlink+0x148>
3187: 83 4c 24 60 08 orl $0x8,0x60(%rsp)
!nla_get_u8(data[IFLA_VXLAN_UDP_CSUM]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM_TX;
if (data[IFLA_VXLAN_UDP_ZERO_CSUM6_TX] &&
nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_TX]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM6_TX;
318c: e9 44 fe ff ff jmpq 2fd5 <vxlan_newlink+0x135>
3191: 83 4c 24 60 04 orl $0x4,0x60(%rsp)
if (data[IFLA_VXLAN_PORT])
conf.dst_port = nla_get_be16(data[IFLA_VXLAN_PORT]);
if (data[IFLA_VXLAN_UDP_CSUM] &&
!nla_get_u8(data[IFLA_VXLAN_UDP_CSUM]))
conf.flags |= VXLAN_F_UDP_ZERO_CSUM_TX;
3196: e9 27 fe ff ff jmpq 2fc2 <vxlan_newlink+0x122>
319b: 83 4c 24 60 02 orl $0x2,0x60(%rsp)
if (data[IFLA_VXLAN_LIMIT])
conf.addrmax = nla_get_u32(data[IFLA_VXLAN_LIMIT]);
if (data[IFLA_VXLAN_COLLECT_METADATA] &&
nla_get_u8(data[IFLA_VXLAN_COLLECT_METADATA]))
conf.flags |= VXLAN_F_COLLECT_METADATA;
31a0: e9 0a fe ff ff jmpq 2faf <vxlan_newlink+0x10f>
31a5: 48 8b b3 88 00 00 00 mov 0x88(%rbx),%rsi
31ac: 48 85 f6 test %rsi,%rsi
if (data[IFLA_VXLAN_L2MISS] && nla_get_u8(data[IFLA_VXLAN_L2MISS]))
conf.flags |= VXLAN_F_L2MISS;
if (data[IFLA_VXLAN_L3MISS] && nla_get_u8(data[IFLA_VXLAN_L3MISS]))
conf.flags |= VXLAN_F_L3MISS;
31af: 0f 84 75 fd ff ff je 2f2a <vxlan_newlink+0x8a>
31b5: ba 10 00 00 00 mov $0x10,%edx
if (data[IFLA_VXLAN_RSC] && nla_get_u8(data[IFLA_VXLAN_RSC]))
conf.flags |= VXLAN_F_RSC;
if (data[IFLA_VXLAN_L2MISS] && nla_get_u8(data[IFLA_VXLAN_L2MISS]))
conf.flags |= VXLAN_F_L2MISS;
31ba: 48 89 e7 mov %rsp,%rdi
31bd: e8 00 00 00 00 callq 31c2 <vxlan_newlink+0x322>
if (data[IFLA_VXLAN_PROXY] && nla_get_u8(data[IFLA_VXLAN_PROXY]))
conf.flags |= VXLAN_F_PROXY;
if (data[IFLA_VXLAN_RSC] && nla_get_u8(data[IFLA_VXLAN_RSC]))
conf.flags |= VXLAN_F_RSC;
31c2: 48 8b 04 24 mov (%rsp),%rax
31c6: 48 8b 54 24 08 mov 0x8(%rsp),%rdx
if (data[IFLA_VXLAN_AGEING])
conf.age_interval = nla_get_u32(data[IFLA_VXLAN_AGEING]);
if (data[IFLA_VXLAN_PROXY] && nla_get_u8(data[IFLA_VXLAN_PROXY]))
conf.flags |= VXLAN_F_PROXY;
31cb: 48 89 44 24 34 mov %rax,0x34(%rsp)
31d0: b8 0a 00 00 00 mov $0xa,%eax
}
if (data[IFLA_VXLAN_LOCAL]) {
conf.saddr.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_LOCAL]);
conf.saddr.sa.sa_family = AF_INET;
} else if (data[IFLA_VXLAN_LOCAL6]) {
31d5: 48 89 54 24 3c mov %rdx,0x3c(%rsp)
31da: 66 89 44 24 2c mov %ax,0x2c(%rsp)
31df: e9 46 fd ff ff jmpq 2f2a <vxlan_newlink+0x8a>
31e4: 48 8b b3 80 00 00 00 mov 0x80(%rbx),%rsi
*/
static inline struct in6_addr nla_get_in6_addr(const struct nlattr *nla)
{
struct in6_addr tmp;
nla_memcpy(&tmp, nla, sizeof(tmp));
31eb: 48 85 f6 test %rsi,%rsi
31ee: 0f 84 18 fd ff ff je 2f0c <vxlan_newlink+0x6c>
return tmp;
31f4: ba 10 00 00 00 mov $0x10,%edx
31f9: 48 89 e7 mov %rsp,%rdi
if (!IS_ENABLED(CONFIG_IPV6))
return -EPFNOSUPPORT;
/* TODO: respect scope id */
conf.saddr.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_LOCAL6]);
31fc: e8 00 00 00 00 callq 3201 <vxlan_newlink+0x361>
conf.saddr.sa.sa_family = AF_INET6;
3201: 48 8b 04 24 mov (%rsp),%rax
} else if (data[IFLA_VXLAN_LOCAL6]) {
if (!IS_ENABLED(CONFIG_IPV6))
return -EPFNOSUPPORT;
/* TODO: respect scope id */
conf.saddr.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_LOCAL6]);
3205: 48 8b 54 24 08 mov 0x8(%rsp),%rdx
conf.saddr.sa.sa_family = AF_INET6;
320a: b9 0a 00 00 00 mov $0xa,%ecx
320f: 66 89 4c 24 10 mov %cx,0x10(%rsp)
if (data[IFLA_VXLAN_ID])
conf.vni = cpu_to_be32(nla_get_u32(data[IFLA_VXLAN_ID]));
if (data[IFLA_VXLAN_GROUP]) {
conf.remote_ip.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_GROUP]);
} else if (data[IFLA_VXLAN_GROUP6]) {
3214: 48 89 44 24 18 mov %rax,0x18(%rsp)
3219: 48 89 54 24 20 mov %rdx,0x20(%rsp)
321e: e9 e9 fc ff ff jmpq 2f0c <vxlan_newlink+0x6c>
3223: e8 00 00 00 00 callq 3228 <vxlan_newlink+0x388>
*/
static inline struct in6_addr nla_get_in6_addr(const struct nlattr *nla)
{
struct in6_addr tmp;
nla_memcpy(&tmp, nla, sizeof(tmp));
3228: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
322f: 00
0000000000003230 <vxlan_dev_create>:
3230: e8 00 00 00 00 callq 3235 <vxlan_dev_create+0x5>
return tmp;
3235: 55 push %rbp
3236: 0f b6 d2 movzbl %dl,%edx
3239: 48 89 e5 mov %rsp,%rbp
if (!IS_ENABLED(CONFIG_IPV6))
return -EPFNOSUPPORT;
conf.remote_ip.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_GROUP6]);
conf.remote_ip.sa.sa_family = AF_INET6;
323c: 41 55 push %r13
323e: 41 54 push %r12
3240: 53 push %rbx
3241: 4c 8d 85 80 fe ff ff lea -0x180(%rbp),%r8
conf.remote_ip.sin.sin_addr.s_addr = nla_get_in_addr(data[IFLA_VXLAN_GROUP]);
} else if (data[IFLA_VXLAN_GROUP6]) {
if (!IS_ENABLED(CONFIG_IPV6))
return -EPFNOSUPPORT;
conf.remote_ip.sin6.sin6_addr = nla_get_in6_addr(data[IFLA_VXLAN_GROUP6]);
3248: 49 89 fd mov %rdi,%r13
324b: 49 89 cc mov %rcx,%r12
324e: b9 2c 00 00 00 mov $0x2c,%ecx
if (tb[IFLA_MTU])
conf.mtu = nla_get_u32(tb[IFLA_MTU]);
return vxlan_dev_configure(src_net, dev, &conf);
}
3253: 48 81 ec 80 01 00 00 sub $0x180,%rsp
325a: 4c 89 c7 mov %r8,%rdi
325d: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
3264: 00 00
struct net_device *dev;
int err;
memset(&tb, 0, sizeof(tb));
dev = rtnl_create_link(net, name, name_assign_type,
3266: 48 89 45 e0 mov %rax,-0x20(%rbp)
};
struct net_device *vxlan_dev_create(struct net *net, const char *name,
u8 name_assign_type,
struct vxlan_config *conf)
{
326a: 31 c0 xor %eax,%eax
326c: f3 48 ab rep stos %rax,%es:(%rdi)
326f: 48 c7 c1 00 00 00 00 mov $0x0,%rcx
struct nlattr *tb[IFLA_MAX + 1];
struct net_device *dev;
int err;
memset(&tb, 0, sizeof(tb));
3276: 4c 89 ef mov %r13,%rdi
};
struct net_device *vxlan_dev_create(struct net *net, const char *name,
u8 name_assign_type,
struct vxlan_config *conf)
{
3279: e8 00 00 00 00 callq 327e <vxlan_dev_create+0x4e>
struct nlattr *tb[IFLA_MAX + 1];
struct net_device *dev;
int err;
memset(&tb, 0, sizeof(tb));
327e: 48 3d 00 f0 ff ff cmp $0xfffffffffffff000,%rax
};
struct net_device *vxlan_dev_create(struct net *net, const char *name,
u8 name_assign_type,
struct vxlan_config *conf)
{
3284: 48 89 c3 mov %rax,%rbx
3287: 0f 87 8e 00 00 00 ja 331b <vxlan_dev_create+0xeb>
328d: 4c 89 e2 mov %r12,%rdx
3290: 48 89 c6 mov %rax,%rsi
3293: 4c 89 ef mov %r13,%rdi
3296: e8 65 f6 ff ff callq 2900 <vxlan_dev_configure>
329b: 85 c0 test %eax,%eax
struct nlattr *tb[IFLA_MAX + 1];
struct net_device *dev;
int err;
memset(&tb, 0, sizeof(tb));
329d: 78 31 js 32d0 <vxlan_dev_create+0xa0>
dev = rtnl_create_link(net, name, name_assign_type,
329f: 31 f6 xor %esi,%esi
32a1: 48 89 df mov %rbx,%rdi
32a4: e8 00 00 00 00 callq 32a9 <vxlan_dev_create+0x79>
32a9: 85 c0 test %eax,%eax
32ab: 48 89 df mov %rbx,%rdi
&vxlan_link_ops, tb);
if (IS_ERR(dev))
32ae: 78 37 js 32e7 <vxlan_dev_create+0xb7>
32b0: 48 8b 4d e0 mov -0x20(%rbp),%rcx
struct net_device *dev;
int err;
memset(&tb, 0, sizeof(tb));
dev = rtnl_create_link(net, name, name_assign_type,
32b4: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
32bb: 00 00
&vxlan_link_ops, tb);
if (IS_ERR(dev))
return dev;
err = vxlan_dev_configure(net, dev, conf);
32bd: 48 89 f8 mov %rdi,%rax
32c0: 75 5e jne 3320 <vxlan_dev_create+0xf0>
32c2: 48 81 c4 80 01 00 00 add $0x180,%rsp
32c9: 5b pop %rbx
32ca: 41 5c pop %r12
if (err < 0) {
32cc: 41 5d pop %r13
32ce: 5d pop %rbp
free_netdev(dev);
return ERR_PTR(err);
}
err = rtnl_configure_link(dev, NULL);
32cf: c3 retq
32d0: 48 89 df mov %rbx,%rdi
32d3: 89 85 6c fe ff ff mov %eax,-0x194(%rbp)
if (err < 0) {
32d9: e8 00 00 00 00 callq 32de <vxlan_dev_create+0xae>
32de: 48 63 bd 6c fe ff ff movslq -0x194(%rbp),%rdi
unregister_netdevice_many(&list_kill);
return ERR_PTR(err);
}
return dev;
}
32e5: eb c9 jmp 32b0 <vxlan_dev_create+0x80>
32e7: 4c 8d a5 70 fe ff ff lea -0x190(%rbp),%r12
32ee: 89 85 6c fe ff ff mov %eax,-0x194(%rbp)
32f4: 4c 89 e6 mov %r12,%rsi
32f7: 4c 89 a5 70 fe ff ff mov %r12,-0x190(%rbp)
32fe: 4c 89 a5 78 fe ff ff mov %r12,-0x188(%rbp)
if (IS_ERR(dev))
return dev;
err = vxlan_dev_configure(net, dev, conf);
if (err < 0) {
free_netdev(dev);
3305: e8 f6 d5 ff ff callq 900 <vxlan_dellink>
330a: 4c 89 e7 mov %r12,%rdi
330d: e8 00 00 00 00 callq 3312 <vxlan_dev_create+0xe2>
3312: 48 63 bd 6c fe ff ff movslq -0x194(%rbp),%rdi
return ERR_PTR(err);
}
err = rtnl_configure_link(dev, NULL);
if (err < 0) {
LIST_HEAD(list_kill);
3319: eb 95 jmp 32b0 <vxlan_dev_create+0x80>
331b: 48 89 c7 mov %rax,%rdi
331e: eb 90 jmp 32b0 <vxlan_dev_create+0x80>
3320: e8 00 00 00 00 callq 3325 <vxlan_dev_create+0xf5>
vxlan_dellink(dev, &list_kill);
3325: 90 nop
3326: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
332d: 00 00 00
0000000000003330 <vxlan_snoop>:
return ERR_PTR(err);
}
err = rtnl_configure_link(dev, NULL);
if (err < 0) {
LIST_HEAD(list_kill);
3330: e8 00 00 00 00 callq 3335 <vxlan_snoop+0x5>
vxlan_dellink(dev, &list_kill);
3335: 55 push %rbp
3336: 48 89 e5 mov %rsp,%rbp
3339: 41 57 push %r15
unregister_netdevice_many(&list_kill);
333b: 41 56 push %r14
333d: 4c 8d b7 40 08 00 00 lea 0x840(%rdi),%r14
3344: 41 55 push %r13
3346: 41 54 push %r12
3348: 53 push %rbx
3349: 49 89 fd mov %rdi,%r13
memset(&tb, 0, sizeof(tb));
dev = rtnl_create_link(net, name, name_assign_type,
&vxlan_link_ops, tb);
if (IS_ERR(dev))
return dev;
334c: 49 89 f4 mov %rsi,%r12
334f: 4c 89 f7 mov %r14,%rdi
unregister_netdevice_many(&list_kill);
return ERR_PTR(err);
}
return dev;
}
3352: 48 89 d6 mov %rdx,%rsi
3355: 49 89 d7 mov %rdx,%r15
3358: 48 83 ec 28 sub $0x28,%rsp
335c: e8 9f cc ff ff callq 0 <__vxlan_find_mac>
* and Tunnel endpoint.
* Return true if packet is bogus and should be dropped.
*/
static bool vxlan_snoop(struct net_device *dev,
union vxlan_addr *src_ip, const u8 *src_mac)
{
3361: 48 85 c0 test %rax,%rax
3364: 74 7d je 33e3 <vxlan_snoop+0xb3>
3366: 48 89 c3 mov %rax,%rbx
3369: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 3370 <vxlan_snoop+0x40>
3370: 4c 8b 4b 30 mov 0x30(%rbx),%r9
3374: 48 89 43 28 mov %rax,0x28(%rbx)
3378: 41 0f b7 41 d8 movzwl -0x28(%r9),%eax
337d: 66 41 3b 04 24 cmp (%r12),%ax
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
3382: 74 1e je 33a2 <vxlan_snoop+0x72>
3384: f6 43 46 40 testb $0x40,0x46(%rbx)
* and Tunnel endpoint.
* Return true if packet is bogus and should be dropped.
*/
static bool vxlan_snoop(struct net_device *dev,
union vxlan_addr *src_ip, const u8 *src_mac)
{
3388: b8 01 00 00 00 mov $0x1,%eax
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
338d: 0f 84 86 00 00 00 je 3419 <vxlan_snoop+0xe9>
if (f)
3393: 48 83 c4 28 add $0x28,%rsp
3397: 5b pop %rbx
3398: 41 5c pop %r12
f->used = jiffies;
339a: 41 5d pop %r13
339c: 41 5e pop %r14
339e: 41 5f pop %r15
33a0: 5d pop %rbp
33a1: c3 retq
33a2: 66 83 f8 0a cmp $0xa,%ax
33a6: 74 21 je 33c9 <vxlan_snoop+0x99>
#if IS_ENABLED(CONFIG_IPV6)
static inline
bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
{
if (a->sa.sa_family != b->sa.sa_family)
33a8: 41 8b 44 24 04 mov 0x4(%r12),%eax
33ad: 41 39 41 dc cmp %eax,-0x24(%r9)
33b1: 0f 94 c0 sete %al
if (likely(vxlan_addr_equal(&rdst->remote_ip, src_ip)))
return false;
/* Don't migrate static entries, drop packets */
if (f->state & NUD_NOARP)
33b4: 84 c0 test %al,%al
33b6: 74 cc je 3384 <vxlan_snoop+0x54>
return true;
33b8: 48 83 c4 28 add $0x28,%rsp
33bc: 31 c0 xor %eax,%eax
if (likely(vxlan_addr_equal(&rdst->remote_ip, src_ip)))
return false;
/* Don't migrate static entries, drop packets */
if (f->state & NUD_NOARP)
33be: 5b pop %rbx
33bf: 41 5c pop %r12
33c1: 41 5d pop %r13
0, NTF_SELF);
spin_unlock(&vxlan->hash_lock);
}
return false;
}
33c3: 41 5e pop %r14
33c5: 41 5f pop %r15
33c7: 5d pop %rbp
33c8: c3 retq
33c9: 49 8b 51 e0 mov -0x20(%r9),%rdx
33cd: 49 8b 41 e8 mov -0x18(%r9),%rax
33d1: 49 33 54 24 08 xor 0x8(%r12),%rdx
static inline
bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
{
if (a->sa.sa_family != b->sa.sa_family)
return false;
if (a->sa.sa_family == AF_INET6)
33d6: 49 33 44 24 10 xor 0x10(%r12),%rax
return ipv6_addr_equal(&a->sin6.sin6_addr, &b->sin6.sin6_addr);
else
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
33db: 48 09 c2 or %rax,%rdx
33de: 0f 94 c0 sete %al
33e1: eb d1 jmp 33b4 <vxlan_snoop+0x84>
33e3: 49 8d 9d 28 09 00 00 lea 0x928(%r13),%rbx
0, NTF_SELF);
spin_unlock(&vxlan->hash_lock);
}
return false;
}
33ea: 48 89 df mov %rbx,%rdi
f = vxlan_find_mac(vxlan, src_mac);
if (likely(f)) {
struct vxlan_rdst *rdst = first_remote_rcu(f);
if (likely(vxlan_addr_equal(&rdst->remote_ip, src_ip)))
return false;
33ed: e8 00 00 00 00 callq 33f2 <vxlan_snoop+0xc2>
0, NTF_SELF);
spin_unlock(&vxlan->hash_lock);
}
return false;
}
33f2: 49 8b 45 48 mov 0x48(%r13),%rax
33f6: a8 01 test $0x1,%al
33f8: 0f 85 a0 00 00 00 jne 349e <vxlan_snoop+0x16e>
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
const unsigned long *ul1 = (const unsigned long *)a1;
const unsigned long *ul2 = (const unsigned long *)a2;
return ((ul1[0] ^ ul2[0]) | (ul1[1] ^ ul2[1])) == 0UL;
33fe: 48 89 df mov %rbx,%rdi
3401: ff 14 25 00 00 00 00 callq *0x0
3408: 48 83 c4 28 add $0x28,%rsp
340c: 31 c0 xor %eax,%eax
340e: 5b pop %rbx
340f: 41 5c pop %r12
3411: 41 5d pop %r13
raw_spin_lock_init(&(_lock)->rlock); \
} while (0)
static __always_inline void spin_lock(spinlock_t *lock)
{
raw_spin_lock(&lock->rlock);
3413: 41 5e pop %r14
3415: 41 5f pop %r15
3417: 5d pop %rbp
3418: c3 retq
3419: 4c 89 4d d0 mov %r9,-0x30(%rbp)
341d: e8 00 00 00 00 callq 3422 <vxlan_snoop+0xf2>
3422: 4c 8b 4d d0 mov -0x30(%rbp),%r9
} else {
/* learned new entry */
spin_lock(&vxlan->hash_lock);
/* close off race between vxlan_flush and incoming packets */
if (netif_running(dev))
3426: 85 c0 test %eax,%eax
3428: 4d 8d 51 d8 lea -0x28(%r9),%r10
342c: 74 28 je 3456 <vxlan_snoop+0x126>
342e: 4c 89 d1 mov %r10,%rcx
3431: 4d 89 e0 mov %r12,%r8
3434: 4c 89 fa mov %r15,%rdx
3437: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
0, NTF_SELF);
spin_unlock(&vxlan->hash_lock);
}
return false;
}
343e: 4c 89 ef mov %r13,%rdi
3441: 4c 89 4d c8 mov %r9,-0x38(%rbp)
3445: 4c 89 55 d0 mov %r10,-0x30(%rbp)
3449: e8 00 00 00 00 callq 344e <vxlan_snoop+0x11e>
/* Don't migrate static entries, drop packets */
if (f->state & NUD_NOARP)
return true;
if (net_ratelimit())
344e: 4c 8b 4d c8 mov -0x38(%rbp),%r9
3452: 4c 8b 55 d0 mov -0x30(%rbp),%r10
3456: 49 8b 04 24 mov (%r12),%rax
345a: b9 1c 00 00 00 mov $0x1c,%ecx
netdev_info(dev,
345f: 4c 89 d2 mov %r10,%rdx
3462: 48 89 de mov %rbx,%rsi
3465: 4c 89 f7 mov %r14,%rdi
3468: 49 89 41 d8 mov %rax,-0x28(%r9)
346c: 49 8b 44 24 08 mov 0x8(%r12),%rax
3471: 49 89 41 e0 mov %rax,-0x20(%r9)
3475: 49 8b 44 24 10 mov 0x10(%r12),%rax
347a: 49 89 41 e8 mov %rax,-0x18(%r9)
347e: 41 8b 44 24 18 mov 0x18(%r12),%eax
3483: 41 89 41 f0 mov %eax,-0x10(%r9)
"%pM migrated from %pIS to %pIS\n",
src_mac, &rdst->remote_ip.sa, &src_ip->sa);
rdst->remote_ip = *src_ip;
3487: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 348e <vxlan_snoop+0x15e>
f->updated = jiffies;
vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH);
348e: 48 89 43 20 mov %rax,0x20(%rbx)
3492: e8 a9 e8 ff ff callq 1d40 <vxlan_fdb_notify>
3497: 31 c0 xor %eax,%eax
if (net_ratelimit())
netdev_info(dev,
"%pM migrated from %pIS to %pIS\n",
src_mac, &rdst->remote_ip.sa, &src_ip->sa);
rdst->remote_ip = *src_ip;
3499: e9 f5 fe ff ff jmpq 3393 <vxlan_snoop+0x63>
349e: 45 0f b7 8d 7c 09 00 movzwl 0x97c(%r13),%r9d
34a5: 00
34a6: c7 44 24 10 02 00 00 movl $0x2,0x10(%rsp)
34ad: 00
34ae: 41 b8 00 06 00 00 mov $0x600,%r8d
34b4: c7 44 24 08 00 00 00 movl $0x0,0x8(%rsp)
34bb: 00
f->updated = jiffies;
34bc: 41 8b 85 a0 08 00 00 mov 0x8a0(%r13),%eax
vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH);
34c3: b9 02 00 00 00 mov $0x2,%ecx
vxlan->default_dst.remote_vni,
0, NTF_SELF);
spin_unlock(&vxlan->hash_lock);
}
return false;
34c8: 4c 89 e2 mov %r12,%rdx
34cb: 4c 89 fe mov %r15,%rsi
/* learned new entry */
spin_lock(&vxlan->hash_lock);
/* close off race between vxlan_flush and incoming packets */
if (netif_running(dev))
vxlan_fdb_create(vxlan, src_mac, src_ip,
34ce: 4c 89 f7 mov %r14,%rdi
34d1: 89 04 24 mov %eax,(%rsp)
34d4: e8 b7 f0 ff ff callq 2590 <vxlan_fdb_create>
34d9: e9 20 ff ff ff jmpq 33fe <vxlan_snoop+0xce>
34de: 66 90 xchg %ax,%ax
00000000000034e0 <vxlan_fdb_add>:
34e0: e8 00 00 00 00 callq 34e5 <vxlan_fdb_add+0x5>
34e5: 55 push %rbp
34e6: 48 89 e5 mov %rsp,%rbp
34e9: 41 57 push %r15
34eb: 41 56 push %r14
34ed: 41 55 push %r13
34ef: 41 54 push %r12
34f1: 49 89 ff mov %rdi,%r15
34f4: 53 push %rbx
34f5: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
34f9: 48 83 ec 58 sub $0x58,%rsp
34fd: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
3504: 00 00
3506: 48 89 44 24 50 mov %rax,0x50(%rsp)
350b: 31 c0 xor %eax,%eax
350d: 0f b7 47 08 movzwl 0x8(%rdi),%eax
/* Add static entry (via netlink) */
static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev,
const unsigned char *addr, u16 vid, u16 flags)
{
3511: a8 82 test $0x82,%al
3513: 0f 84 cb 00 00 00 je 35e4 <vxlan_fdb_add+0x104>
3519: 48 83 7e 08 00 cmpq $0x0,0x8(%rsi)
351e: 48 89 f7 mov %rsi,%rdi
3521: 0f 84 b6 00 00 00 je 35dd <vxlan_fdb_add+0xfd>
3527: 4c 8d a2 40 08 00 00 lea 0x840(%rdx),%r12
352e: 48 89 d3 mov %rdx,%rbx
3531: 49 89 cd mov %rcx,%r13
3534: 45 89 ce mov %r9d,%r14d
3537: 4c 8d 44 24 20 lea 0x20(%rsp),%r8
353c: 4c 8d 4c 24 24 lea 0x24(%rsp),%r9
__be16 port;
__be32 vni;
u32 ifindex;
int err;
if (!(ndm->ndm_state & (NUD_PERMANENT|NUD_REACHABLE))) {
3541: 48 8d 4c 24 1e lea 0x1e(%rsp),%rcx
3546: 48 8d 54 24 28 lea 0x28(%rsp),%rdx
pr_info("RTM_NEWNEIGH with invalid state %#x\n",
ndm->ndm_state);
return -EINVAL;
}
if (tb[NDA_DST] == NULL)
354b: 4c 89 e6 mov %r12,%rsi
354e: e8 cd d0 ff ff callq 620 <vxlan_fdb_parse>
3553: 85 c0 test %eax,%eax
3555: 75 67 jne 35be <vxlan_fdb_add+0xde>
3557: 0f b7 74 24 28 movzwl 0x28(%rsp),%esi
355c: b8 9f ff ff ff mov $0xffffff9f,%eax
3561: 66 39 b3 80 08 00 00 cmp %si,0x880(%rbx)
return -EINVAL;
err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &vni, &ifindex);
3568: 75 54 jne 35be <vxlan_fdb_add+0xde>
356a: 48 81 c3 28 09 00 00 add $0x928,%rbx
3571: 48 89 df mov %rbx,%rdi
3574: e8 00 00 00 00 callq 3579 <vxlan_fdb_add+0x99>
3579: 41 0f b6 47 0a movzbl 0xa(%r15),%eax
357e: 44 0f b7 4c 24 1e movzwl 0x1e(%rsp),%r9d
if (err)
3584: 48 8d 54 24 28 lea 0x28(%rsp),%rdx
return err;
if (vxlan->default_dst.remote_ip.sa.sa_family != ip.sa.sa_family)
3589: 41 0f b7 4f 08 movzwl 0x8(%r15),%ecx
return -EAFNOSUPPORT;
358e: 4c 89 e7 mov %r12,%rdi
err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &vni, &ifindex);
if (err)
return err;
if (vxlan->default_dst.remote_ip.sa.sa_family != ip.sa.sa_family)
3591: 45 0f b7 c6 movzwl %r14w,%r8d
3595: 4c 89 ee mov %r13,%rsi
3598: 89 44 24 10 mov %eax,0x10(%rsp)
}
static __always_inline void spin_lock_bh(spinlock_t *lock)
{
raw_spin_lock_bh(&lock->rlock);
359c: 8b 44 24 24 mov 0x24(%rsp),%eax
35a0: 89 44 24 08 mov %eax,0x8(%rsp)
35a4: 8b 44 24 20 mov 0x20(%rsp),%eax
35a8: 89 04 24 mov %eax,(%rsp)
return -EAFNOSUPPORT;
spin_lock_bh(&vxlan->hash_lock);
err = vxlan_fdb_create(vxlan, addr, &ip, ndm->ndm_state, flags,
35ab: e8 e0 ef ff ff callq 2590 <vxlan_fdb_create>
35b0: 48 89 df mov %rbx,%rdi
35b3: 41 89 c4 mov %eax,%r12d
35b6: e8 00 00 00 00 callq 35bb <vxlan_fdb_add+0xdb>
35bb: 44 89 e0 mov %r12d,%eax
35be: 48 8b 74 24 50 mov 0x50(%rsp),%rsi
35c3: 65 48 33 34 25 28 00 xor %gs:0x28,%rsi
35ca: 00 00
35cc: 75 2c jne 35fa <vxlan_fdb_add+0x11a>
35ce: 48 8d 65 d8 lea -0x28(%rbp),%rsp
35d2: 5b pop %rbx
35d3: 41 5c pop %r12
35d5: 41 5d pop %r13
35d7: 41 5e pop %r14
35d9: 41 5f pop %r15
35db: 5d pop %rbp
35dc: c3 retq
35dd: b8 ea ff ff ff mov $0xffffffea,%eax
raw_spin_unlock(&lock->rlock);
}
static __always_inline void spin_unlock_bh(spinlock_t *lock)
{
raw_spin_unlock_bh(&lock->rlock);
35e2: eb da jmp 35be <vxlan_fdb_add+0xde>
35e4: 0f b7 f0 movzwl %ax,%esi
35e7: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
port, vni, ifindex, ndm->ndm_flags);
spin_unlock_bh(&vxlan->hash_lock);
return err;
}
35ee: e8 00 00 00 00 callq 35f3 <vxlan_fdb_add+0x113>
35f3: b8 ea ff ff ff mov $0xffffffea,%eax
35f8: eb c4 jmp 35be <vxlan_fdb_add+0xde>
35fa: e8 00 00 00 00 callq 35ff <vxlan_fdb_add+0x11f>
35ff: 90 nop
0000000000003600 <vxlan_get_route>:
3600: e8 00 00 00 00 callq 3605 <vxlan_get_route+0x5>
3605: 55 push %rbp
3606: 48 89 e5 mov %rsp,%rbp
3609: 41 57 push %r15
360b: 41 56 push %r14
ndm->ndm_state);
return -EINVAL;
}
if (tb[NDA_DST] == NULL)
return -EINVAL;
360d: 41 55 push %r13
360f: 41 54 push %r12
3611: 4d 89 cc mov %r9,%r12
__be32 vni;
u32 ifindex;
int err;
if (!(ndm->ndm_state & (NUD_PERMANENT|NUD_REACHABLE))) {
pr_info("RTM_NEWNEIGH with invalid state %#x\n",
3614: 53 push %rbx
3615: 89 cb mov %ecx,%ebx
3617: 49 89 ff mov %rdi,%r15
361a: 48 83 ec 48 sub $0x48,%rsp
361e: 44 8b 8e b4 00 00 00 mov 0xb4(%rsi),%r9d
ndm->ndm_state);
return -EINVAL;
3625: 4c 8b 6d 10 mov 0x10(%rbp),%r13
3629: 65 48 8b 0c 25 28 00 mov %gs:0x28,%rcx
3630: 00 00
static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan,
struct sk_buff *skb, int oif, u8 tos,
__be32 daddr, __be32 *saddr,
struct dst_cache *dst_cache,
const struct ip_tunnel_info *info)
{
3632: 48 89 4d d0 mov %rcx,-0x30(%rbp)
3636: 31 c9 xor %ecx,%ecx
3638: 48 8b 45 18 mov 0x18(%rbp),%rax
363c: 45 85 c9 test %r9d,%r9d
363f: 75 56 jne 3697 <vxlan_get_route+0x97>
3641: 48 85 c0 test %rax,%rax
3644: 49 89 f6 mov %rsi,%r14
3647: 74 4a je 3693 <vxlan_get_route+0x93>
3649: f6 40 28 20 testb $0x20,0x28(%rax)
364d: 75 48 jne 3697 <vxlan_get_route+0x97>
364f: 4c 89 e6 mov %r12,%rsi
3652: 4c 89 ef mov %r13,%rdi
3655: 44 89 45 94 mov %r8d,-0x6c(%rbp)
3659: 89 55 98 mov %edx,-0x68(%rbp)
365c: e8 00 00 00 00 callq 3661 <vxlan_get_route+0x61>
3661: 48 85 c0 test %rax,%rax
3664: 8b 55 98 mov -0x68(%rbp),%edx
3667: 44 8b 45 94 mov -0x6c(%rbp),%r8d
366b: 0f 84 8d 00 00 00 je 36fe <vxlan_get_route+0xfe>
ip_tunnel_dst_cache_usable(const struct sk_buff *skb,
const struct ip_tunnel_info *info)
{
if (skb->mark)
return false;
if (!info)
3671: 48 8b 5d d0 mov -0x30(%rbp),%rbx
3675: 65 48 33 1c 25 28 00 xor %gs:0x28,%rbx
367c: 00 00
return true;
if (info->key.tun_flags & TUNNEL_NOCACHE)
367e: 0f 85 89 00 00 00 jne 370d <vxlan_get_route+0x10d>
struct flowi4 fl4;
if (tos && !info)
use_cache = false;
if (use_cache) {
rt = dst_cache_get_ip4(dst_cache, saddr);
3684: 48 83 c4 48 add $0x48,%rsp
3688: 5b pop %rbx
3689: 41 5c pop %r12
368b: 41 5d pop %r13
368d: 41 5e pop %r14
368f: 41 5f pop %r15
if (rt)
3691: 5d pop %rbp
3692: c3 retq
3693: 84 db test %bl,%bl
3695: 74 b8 je 364f <vxlan_get_route+0x4f>
3697: 45 31 f6 xor %r14d,%r14d
369a: 48 8d 75 a0 lea -0x60(%rbp),%rsi
369e: 31 c0 xor %eax,%eax
36a0: b9 06 00 00 00 mov $0x6,%ecx
*saddr = fl4.saddr;
if (use_cache)
dst_cache_set_ip4(dst_cache, &rt->dst, fl4.saddr);
}
return rt;
}
36a5: 83 e3 1e and $0x1e,%ebx
36a8: 48 89 f7 mov %rsi,%rdi
36ab: f3 48 ab rep stos %rax,%es:(%rdi)
36ae: 41 8b 04 24 mov (%r12),%eax
36b2: 49 8b 7f 38 mov 0x38(%r15),%rdi
36b6: 89 55 a0 mov %edx,-0x60(%rbp)
36b9: 31 d2 xor %edx,%edx
36bb: 88 5d ac mov %bl,-0x54(%rbp)
36be: 44 89 4d a8 mov %r9d,-0x58(%rbp)
36c2: c6 45 ae 11 movb $0x11,-0x52(%rbp)
{
bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
struct rtable *rt = NULL;
struct flowi4 fl4;
if (tos && !info)
36c6: 44 89 45 c4 mov %r8d,-0x3c(%rbp)
rt = dst_cache_get_ip4(dst_cache, saddr);
if (rt)
return rt;
}
memset(&fl4, 0, sizeof(fl4));
36ca: 89 45 c0 mov %eax,-0x40(%rbp)
36cd: e8 00 00 00 00 callq 36d2 <vxlan_get_route+0xd2>
36d2: 48 3d 00 f0 ff ff cmp $0xfffffffffffff000,%rax
36d8: 77 97 ja 3671 <vxlan_get_route+0x71>
36da: 8b 55 c0 mov -0x40(%rbp),%edx
36dd: 45 84 f6 test %r14b,%r14b
fl4.flowi4_oif = oif;
fl4.flowi4_tos = RT_TOS(tos);
fl4.flowi4_mark = skb->mark;
fl4.flowi4_proto = IPPROTO_UDP;
fl4.daddr = daddr;
fl4.saddr = *saddr;
36e0: 41 89 14 24 mov %edx,(%r12)
struct dst_entry *ipv4_blackhole_route(struct net *net,
struct dst_entry *dst_orig);
static inline struct rtable *ip_route_output_key(struct net *net, struct flowi4 *flp)
{
return ip_route_output_flow(net, flp, NULL);
36e4: 74 8b je 3671 <vxlan_get_route+0x71>
if (rt)
return rt;
}
memset(&fl4, 0, sizeof(fl4));
fl4.flowi4_oif = oif;
36e6: 48 89 c6 mov %rax,%rsi
36e9: 4c 89 ef mov %r13,%rdi
fl4.flowi4_tos = RT_TOS(tos);
36ec: 48 89 45 98 mov %rax,-0x68(%rbp)
fl4.flowi4_mark = skb->mark;
36f0: e8 00 00 00 00 callq 36f5 <vxlan_get_route+0xf5>
fl4.flowi4_proto = IPPROTO_UDP;
36f5: 48 8b 45 98 mov -0x68(%rbp),%rax
fl4.daddr = daddr;
36f9: e9 73 ff ff ff jmpq 3671 <vxlan_get_route+0x71>
36fe: 45 8b 8e b4 00 00 00 mov 0xb4(%r14),%r9d
fl4.saddr = *saddr;
rt = ip_route_output_key(vxlan->net, &fl4);
if (!IS_ERR(rt)) {
3705: 41 be 01 00 00 00 mov $0x1,%r14d
*saddr = fl4.saddr;
370b: eb 8d jmp 369a <vxlan_get_route+0x9a>
if (use_cache)
370d: e8 00 00 00 00 callq 3712 <vxlan_get_route+0x112>
fl4.daddr = daddr;
fl4.saddr = *saddr;
rt = ip_route_output_key(vxlan->net, &fl4);
if (!IS_ERR(rt)) {
*saddr = fl4.saddr;
3712: 0f 1f 40 00 nopl 0x0(%rax)
if (use_cache)
dst_cache_set_ip4(dst_cache, &rt->dst, fl4.saddr);
3716: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
371d: 00 00 00
0000000000003720 <vxlan_stop>:
3720: e8 00 00 00 00 callq 3725 <vxlan_stop+0x5>
3725: 55 push %rbp
3726: 48 89 e5 mov %rsp,%rbp
3729: 41 57 push %r15
372b: 41 56 push %r14
372d: 41 55 push %r13
372f: 41 54 push %r12
3731: 49 89 fe mov %rdi,%r14
3734: 53 push %rbx
if (tos && !info)
use_cache = false;
if (use_cache) {
rt = dst_cache_get_ip4(dst_cache, saddr);
if (rt)
3735: 4c 8d a7 40 08 00 00 lea 0x840(%rdi),%r12
373c: 48 83 ec 28 sub $0x28,%rsp
*saddr = fl4.saddr;
if (use_cache)
dst_cache_set_ip4(dst_cache, &rt->dst, fl4.saddr);
}
return rt;
}
3740: 0f b7 b7 80 08 00 00 movzwl 0x880(%rdi),%esi
3747: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
374e: 00 00
spin_unlock_bh(&vxlan->hash_lock);
}
/* Cleanup timer and forwarding table on shutdown */
static int vxlan_stop(struct net_device *dev)
{
3750: 48 89 45 d0 mov %rax,-0x30(%rbp)
3754: 31 c0 xor %eax,%eax
3756: 48 8b 87 78 08 00 00 mov 0x878(%rdi),%rax
375d: 48 8b 90 88 14 00 00 mov 0x1488(%rax),%rdx
3764: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # 376a <vxlan_stop+0x4a>
376a: 83 e8 01 sub $0x1,%eax
376d: 66 83 fe 0a cmp $0xa,%si
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
}
static inline bool vxlan_addr_multicast(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
3771: 48 98 cltq
3773: 48 8b 4c c2 18 mov 0x18(%rdx,%rax,8),%rcx
spin_unlock_bh(&vxlan->hash_lock);
}
/* Cleanup timer and forwarding table on shutdown */
static int vxlan_stop(struct net_device *dev)
{
3778: 0f 84 c4 00 00 00 je 3842 <vxlan_stop+0x122>
377e: 8b bf 84 08 00 00 mov 0x884(%rdi),%edi
3784: c7 45 bc 00 00 00 00 movl $0x0,-0x44(%rbp)
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
378b: 89 f8 mov %edi,%eax
378d: 25 f0 00 00 00 and $0xf0,%eax
3792: 3d e0 00 00 00 cmp $0xe0,%eax
3797: 0f 84 c3 00 00 00 je 3860 <vxlan_stop+0x140>
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
}
static inline bool vxlan_addr_multicast(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
379d: 49 8d be e0 08 00 00 lea 0x8e0(%r14),%rdi
37a4: 49 8d 9e a0 09 00 00 lea 0x9a0(%r14),%rbx
37ab: 4d 8d ae a0 11 00 00 lea 0x11a0(%r14),%r13
return ipv6_addr_is_multicast(&ipa->sin6.sin6_addr);
else
return IN_MULTICAST(ntohl(ipa->sin.sin_addr.s_addr));
37b2: e8 00 00 00 00 callq 37b7 <vxlan_stop+0x97>
/* Cleanup timer and forwarding table on shutdown */
static int vxlan_stop(struct net_device *dev)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
int ret = 0;
37b7: 49 8d 86 28 09 00 00 lea 0x928(%r14),%rax
if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip) &&
37be: 48 89 c7 mov %rax,%rdi
37c1: 48 89 45 b0 mov %rax,-0x50(%rbp)
37c5: e8 00 00 00 00 callq 37ca <vxlan_stop+0xaa>
37ca: 48 8b 33 mov (%rbx),%rsi
!vxlan_group_used(vn, vxlan))
ret = vxlan_igmp_leave(vxlan);
del_timer_sync(&vxlan->age_timer);
37cd: 48 85 f6 test %rsi,%rsi
37d0: 75 0a jne 37dc <vxlan_stop+0xbc>
37d2: eb 24 jmp 37f8 <vxlan_stop+0xd8>
37d4: 4d 85 ff test %r15,%r15
37d7: 4c 89 fe mov %r15,%rsi
37da: 74 1c je 37f8 <vxlan_stop+0xd8>
37dc: 0f b7 56 44 movzwl 0x44(%rsi),%edx
37e0: 4c 8b 3e mov (%rsi),%r15
37e3: 0b 56 40 or 0x40(%rsi),%edx
37e6: 74 ec je 37d4 <vxlan_stop+0xb4>
raw_spin_lock(&lock->rlock);
}
static __always_inline void spin_lock_bh(spinlock_t *lock)
{
raw_spin_lock_bh(&lock->rlock);
37e8: 4c 89 e7 mov %r12,%rdi
37eb: e8 40 e6 ff ff callq 1e30 <vxlan_fdb_destroy>
37f0: 4d 85 ff test %r15,%r15
37f3: 4c 89 fe mov %r15,%rsi
37f6: 75 e4 jne 37dc <vxlan_stop+0xbc>
37f8: 48 83 c3 08 add $0x8,%rbx
unsigned int h;
spin_lock_bh(&vxlan->hash_lock);
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct hlist_node *p, *n;
hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
37fc: 4c 39 eb cmp %r13,%rbx
37ff: 75 c9 jne 37ca <vxlan_stop+0xaa>
3801: 48 8b 7d b0 mov -0x50(%rbp),%rdi
3805: e8 00 00 00 00 callq 380a <vxlan_stop+0xea>
380a: 49 8d b6 68 08 00 00 lea 0x868(%r14),%rsi
3811: 49 8d be 60 08 00 00 lea 0x860(%r14),%rdi
struct vxlan_fdb *f
= container_of(p, struct vxlan_fdb, hlist);
/* the all_zeros_mac entry is deleted at vxlan_uninit */
if (!is_zero_ether_addr(f->eth_addr))
vxlan_fdb_destroy(vxlan, f);
3818: e8 73 d4 ff ff callq c90 <vxlan_sock_release.isra.44>
381d: 48 8b 4d d0 mov -0x30(%rbp),%rcx
unsigned int h;
spin_lock_bh(&vxlan->hash_lock);
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct hlist_node *p, *n;
hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
3821: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
3828: 00 00
382a: 8b 45 bc mov -0x44(%rbp),%eax
static void vxlan_flush(struct vxlan_dev *vxlan)
{
unsigned int h;
spin_lock_bh(&vxlan->hash_lock);
for (h = 0; h < FDB_HASH_SIZE; ++h) {
382d: 0f 85 f0 01 00 00 jne 3a23 <vxlan_stop+0x303>
raw_spin_unlock(&lock->rlock);
}
static __always_inline void spin_unlock_bh(spinlock_t *lock)
{
raw_spin_unlock_bh(&lock->rlock);
3833: 48 83 c4 28 add $0x28,%rsp
3837: 5b pop %rbx
3838: 41 5c pop %r12
383a: 41 5d pop %r13
383c: 41 5e pop %r14
383e: 41 5f pop %r15
3840: 5d pop %rbp
3841: c3 retq
3842: 0f b6 87 88 08 00 00 movzbl 0x888(%rdi),%eax
ret = vxlan_igmp_leave(vxlan);
del_timer_sync(&vxlan->age_timer);
vxlan_flush(vxlan);
vxlan_sock_release(vxlan);
3849: 3d ff 00 00 00 cmp $0xff,%eax
return ret;
}
384e: 0f 84 92 01 00 00 je 39e6 <vxlan_stop+0x2c6>
3854: c7 45 bc 00 00 00 00 movl $0x0,-0x44(%rbp)
385b: e9 3d ff ff ff jmpq 379d <vxlan_stop+0x7d>
3860: 66 83 fe 02 cmp $0x2,%si
3864: 0f 84 12 01 00 00 je 397c <vxlan_stop+0x25c>
386a: 48 8b 11 mov (%rcx),%rdx
386d: 48 39 d1 cmp %rdx,%rcx
3870: 48 8d 42 f0 lea -0x10(%rdx),%rax
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
int ret = 0;
if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip) &&
3874: 0f 84 96 01 00 00 je 3a10 <vxlan_stop+0x2f0>
387a: 41 8b 9e a4 08 00 00 mov 0x8a4(%r14),%ebx
3881: eb 21 jmp 38a4 <vxlan_stop+0x184>
3883: 66 83 fe 0a cmp $0xa,%si
/* Cleanup timer and forwarding table on shutdown */
static int vxlan_stop(struct net_device *dev)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
int ret = 0;
3887: 0f 84 92 00 00 00 je 391f <vxlan_stop+0x1ff>
388d: 66 3b 70 40 cmp 0x40(%rax),%si
unsigned short family = dev->default_dst.remote_ip.sa.sa_family;
/* The vxlan_sock is only used by dev, leaving group has
* no effect on other vxlan devices.
*/
if (family == AF_INET && dev->vn4_sock &&
3891: 0f 84 c2 00 00 00 je 3959 <vxlan_stop+0x239>
3897: 48 8b 50 10 mov 0x10(%rax),%rdx
if (family == AF_INET6 && dev->vn6_sock &&
atomic_read(&dev->vn6_sock->refcnt) == 1)
return false;
#endif
list_for_each_entry(vxlan, &vn->vxlan_list, next) {
389b: 48 39 d1 cmp %rdx,%rcx
389e: 48 8d 42 f0 lea -0x10(%rdx),%rax
38a2: 74 32 je 38d6 <vxlan_stop+0x1b6>
38a4: 48 8b 50 30 mov 0x30(%rax),%rdx
38a8: 48 8b 52 48 mov 0x48(%rdx),%rdx
38ac: 83 e2 01 and $0x1,%edx
38af: 74 e6 je 3897 <vxlan_stop+0x177>
38b1: 49 39 c4 cmp %rax,%r12
continue;
if (family == AF_INET && vxlan->vn4_sock != dev->vn4_sock)
continue;
#if IS_ENABLED(CONFIG_IPV6)
if (family == AF_INET6 && vxlan->vn6_sock != dev->vn6_sock)
38b4: 74 e1 je 3897 <vxlan_stop+0x177>
38b6: 66 83 fe 02 cmp $0x2,%si
38ba: 75 c7 jne 3883 <vxlan_stop+0x163>
38bc: 49 8b be 60 08 00 00 mov 0x860(%r14),%rdi
#if IS_ENABLED(CONFIG_IPV6)
static inline
bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
{
if (a->sa.sa_family != b->sa.sa_family)
38c3: 48 39 78 20 cmp %rdi,0x20(%rax)
if (family == AF_INET6 && dev->vn6_sock &&
atomic_read(&dev->vn6_sock->refcnt) == 1)
return false;
#endif
list_for_each_entry(vxlan, &vn->vxlan_list, next) {
38c7: 74 c4 je 388d <vxlan_stop+0x16d>
38c9: 48 8b 50 10 mov 0x10(%rax),%rdx
38cd: 48 39 d1 cmp %rdx,%rcx
38d0: 48 8d 42 f0 lea -0x10(%rdx),%rax
if (!netif_running(vxlan->dev) || vxlan == dev)
38d4: 75 ce jne 38a4 <vxlan_stop+0x184>
38d6: 66 83 fe 02 cmp $0x2,%si
38da: 0f 84 48 01 00 00 je 3a28 <vxlan_stop+0x308>
38e0: 49 8b 86 68 08 00 00 mov 0x868(%r14),%rax
continue;
if (family == AF_INET && vxlan->vn4_sock != dev->vn4_sock)
38e7: 48 8b 40 10 mov 0x10(%rax),%rax
38eb: 31 f6 xor %esi,%esi
38ed: 4c 8b 68 20 mov 0x20(%rax),%r13
38f1: 4c 89 ef mov %r13,%rdi
38f4: e8 00 00 00 00 callq 38f9 <vxlan_stop+0x1d9>
if (family == AF_INET6 && dev->vn6_sock &&
atomic_read(&dev->vn6_sock->refcnt) == 1)
return false;
#endif
list_for_each_entry(vxlan, &vn->vxlan_list, next) {
38f9: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 3900 <vxlan_stop+0x1e0>
3900: 4c 89 ef mov %r13,%rdi
3903: 49 8d 96 88 08 00 00 lea 0x888(%r14),%rdx
struct sock *sk;
union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
int ifindex = vxlan->default_dst.remote_ifindex;
int ret = -EINVAL;
if (ip->sa.sa_family == AF_INET) {
390a: 89 de mov %ebx,%esi
390c: ff 50 08 callq *0x8(%rax)
390f: 4c 89 ef mov %r13,%rdi
3912: 89 45 bc mov %eax,-0x44(%rbp)
3915: e8 00 00 00 00 callq 391a <vxlan_stop+0x1fa>
lock_sock(sk);
ret = ip_mc_leave_group(sk, &mreq);
release_sock(sk);
#if IS_ENABLED(CONFIG_IPV6)
} else {
sk = vxlan->vn6_sock->sock->sk;
391a: e9 7e fe ff ff jmpq 379d <vxlan_stop+0x7d>
391f: 49 8b be 68 08 00 00 mov 0x868(%r14),%rdi
3926: 48 39 78 28 cmp %rdi,0x28(%rax)
lock_sock(sk);
ret = ipv6_stub->ipv6_sock_mc_drop(sk, ifindex,
392a: 0f 85 67 ff ff ff jne 3897 <vxlan_stop+0x177>
3930: 66 83 78 40 0a cmpw $0xa,0x40(%rax)
&ip->sin6.sin6_addr);
3935: 0f 85 5c ff ff ff jne 3897 <vxlan_stop+0x177>
release_sock(sk);
#if IS_ENABLED(CONFIG_IPV6)
} else {
sk = vxlan->vn6_sock->sock->sk;
lock_sock(sk);
ret = ipv6_stub->ipv6_sock_mc_drop(sk, ifindex,
393b: 48 8b 78 48 mov 0x48(%rax),%rdi
&ip->sin6.sin6_addr);
release_sock(sk);
393f: 48 8b 50 50 mov 0x50(%rax),%rdx
release_sock(sk);
#if IS_ENABLED(CONFIG_IPV6)
} else {
sk = vxlan->vn6_sock->sock->sk;
lock_sock(sk);
ret = ipv6_stub->ipv6_sock_mc_drop(sk, ifindex,
3943: 49 33 be 88 08 00 00 xor 0x888(%r14),%rdi
&ip->sin6.sin6_addr);
release_sock(sk);
394a: 49 33 96 90 08 00 00 xor 0x890(%r14),%rdx
continue;
if (family == AF_INET && vxlan->vn4_sock != dev->vn4_sock)
continue;
#if IS_ENABLED(CONFIG_IPV6)
if (family == AF_INET6 && vxlan->vn6_sock != dev->vn6_sock)
3951: 48 09 d7 or %rdx,%rdi
3954: 0f 94 c2 sete %dl
3957: eb 0d jmp 3966 <vxlan_stop+0x246>
3959: 41 8b be 84 08 00 00 mov 0x884(%r14),%edi
#if IS_ENABLED(CONFIG_IPV6)
static inline
bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
{
if (a->sa.sa_family != b->sa.sa_family)
3960: 39 78 44 cmp %edi,0x44(%rax)
3963: 0f 94 c2 sete %dl
3966: 84 d2 test %dl,%dl
3968: 0f 84 29 ff ff ff je 3897 <vxlan_stop+0x177>
396e: 39 58 64 cmp %ebx,0x64(%rax)
3971: 0f 85 20 ff ff ff jne 3897 <vxlan_stop+0x177>
3977: e9 d8 fe ff ff jmpq 3854 <vxlan_stop+0x134>
397c: 49 8b 96 60 08 00 00 mov 0x860(%r14),%rdx
3983: 48 85 d2 test %rdx,%rdx
3986: 74 0b je 3993 <vxlan_stop+0x273>
3988: 8b 82 18 20 00 00 mov 0x2018(%rdx),%eax
return false;
if (a->sa.sa_family == AF_INET6)
return ipv6_addr_equal(&a->sin6.sin6_addr, &b->sin6.sin6_addr);
else
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
398e: 83 f8 01 cmp $0x1,%eax
3991: 74 10 je 39a3 <vxlan_stop+0x283>
3993: 4c 8b 01 mov (%rcx),%r8
#if IS_ENABLED(CONFIG_IPV6)
if (family == AF_INET6 && vxlan->vn6_sock != dev->vn6_sock)
continue;
#endif
if (!vxlan_addr_equal(&vxlan->default_dst.remote_ip,
3996: 4c 39 c1 cmp %r8,%rcx
3999: 49 8d 40 f0 lea -0x10(%r8),%rax
399d: 0f 85 d7 fe ff ff jne 387a <vxlan_stop+0x15a>
&dev->default_dst.remote_ip))
continue;
if (vxlan->default_dst.remote_ifindex !=
39a3: 41 8b 9e a4 08 00 00 mov 0x8a4(%r14),%ebx
39aa: 48 8b 42 10 mov 0x10(%rdx),%rax
unsigned short family = dev->default_dst.remote_ip.sa.sa_family;
/* The vxlan_sock is only used by dev, leaving group has
* no effect on other vxlan devices.
*/
if (family == AF_INET && dev->vn4_sock &&
39ae: 89 5d cc mov %ebx,-0x34(%rbp)
39b1: 31 f6 xor %esi,%esi
39b3: 48 c7 45 c4 00 00 00 movq $0x0,-0x3c(%rbp)
39ba: 00
39bb: 89 7d c4 mov %edi,-0x3c(%rbp)
39be: 48 8b 58 20 mov 0x20(%rax),%rbx
39c2: 48 89 df mov %rbx,%rdi
if (family == AF_INET6 && dev->vn6_sock &&
atomic_read(&dev->vn6_sock->refcnt) == 1)
return false;
#endif
list_for_each_entry(vxlan, &vn->vxlan_list, next) {
39c5: e8 00 00 00 00 callq 39ca <vxlan_stop+0x2aa>
39ca: 48 8d 75 c4 lea -0x3c(%rbp),%rsi
39ce: 48 89 df mov %rbx,%rdi
39d1: e8 00 00 00 00 callq 39d6 <vxlan_stop+0x2b6>
/* Inverse of vxlan_igmp_join when last VNI is brought down */
static int vxlan_igmp_leave(struct vxlan_dev *vxlan)
{
struct sock *sk;
union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
int ifindex = vxlan->default_dst.remote_ifindex;
39d6: 48 89 df mov %rbx,%rdi
39d9: 89 45 bc mov %eax,-0x44(%rbp)
struct ip_mreqn mreq = {
.imr_multiaddr.s_addr = ip->sin.sin_addr.s_addr,
.imr_ifindex = ifindex,
};
sk = vxlan->vn4_sock->sock->sk;
39dc: e8 00 00 00 00 callq 39e1 <vxlan_stop+0x2c1>
39e1: e9 b7 fd ff ff jmpq 379d <vxlan_stop+0x7d>
union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
int ifindex = vxlan->default_dst.remote_ifindex;
int ret = -EINVAL;
if (ip->sa.sa_family == AF_INET) {
struct ip_mreqn mreq = {
39e6: 48 8b 87 68 08 00 00 mov 0x868(%rdi),%rax
39ed: 48 85 c0 test %rax,%rax
.imr_multiaddr.s_addr = ip->sin.sin_addr.s_addr,
.imr_ifindex = ifindex,
};
sk = vxlan->vn4_sock->sock->sk;
39f0: 0f 84 74 fe ff ff je 386a <vxlan_stop+0x14a>
39f6: 8b 90 18 20 00 00 mov 0x2018(%rax),%edx
lock_sock(sk);
ret = ip_mc_leave_group(sk, &mreq);
39fc: 83 fa 01 cmp $0x1,%edx
39ff: 0f 85 65 fe ff ff jne 386a <vxlan_stop+0x14a>
3a05: 8b 9f a4 08 00 00 mov 0x8a4(%rdi),%ebx
3a0b: e9 d7 fe ff ff jmpq 38e7 <vxlan_stop+0x1c7>
release_sock(sk);
3a10: 41 8b 9e a4 08 00 00 mov 0x8a4(%r14),%ebx
*/
if (family == AF_INET && dev->vn4_sock &&
atomic_read(&dev->vn4_sock->refcnt) == 1)
return false;
#if IS_ENABLED(CONFIG_IPV6)
if (family == AF_INET6 && dev->vn6_sock &&
3a17: 49 8b 86 68 08 00 00 mov 0x868(%r14),%rax
3a1e: e9 c4 fe ff ff jmpq 38e7 <vxlan_stop+0x1c7>
3a23: e8 00 00 00 00 callq 3a28 <vxlan_stop+0x308>
3a28: 41 8b be 84 08 00 00 mov 0x884(%r14),%edi
3a2f: 49 8b 96 60 08 00 00 mov 0x860(%r14),%rdx
/* Inverse of vxlan_igmp_join when last VNI is brought down */
static int vxlan_igmp_leave(struct vxlan_dev *vxlan)
{
struct sock *sk;
union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
int ifindex = vxlan->default_dst.remote_ifindex;
3a36: e9 6f ff ff ff jmpq 39aa <vxlan_stop+0x28a>
3a3b: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
0000000000003a40 <vxlan_encap_bypass.isra.47>:
3a40: e8 00 00 00 00 callq 3a45 <vxlan_encap_bypass.isra.47+0x5>
3a45: 55 push %rbp
3a46: 49 89 d0 mov %rdx,%r8
3a49: 48 89 e5 mov %rsp,%rbp
3a4c: 41 57 push %r15
3a4e: 41 56 push %r14
3a50: 41 55 push %r13
3a52: 41 54 push %r12
vxlan_flush(vxlan);
vxlan_sock_release(vxlan);
return ret;
}
3a54: 53 push %rbx
3a55: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
3a59: 48 83 ec 30 sub $0x30,%rsp
3a5d: 48 8b 5f 20 mov 0x20(%rdi),%rbx
3a61: 4c 63 b7 80 00 00 00 movslq 0x80(%rdi),%r14
3a68: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
3a6f: 00 00
return ndst;
}
#endif
/* Bypass encapsulation if the destination is local */
static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
3a71: 48 89 44 24 28 mov %rax,0x28(%rsp)
3a76: 31 c0 xor %eax,%eax
3a78: 4c 8b a6 88 04 00 00 mov 0x488(%rsi),%r12
3a7f: 65 4c 03 25 00 00 00 add %gs:0x0(%rip),%r12 # 3a87 <vxlan_encap_bypass.isra.47+0x47>
3a86: 00
3a87: 48 8b 42 30 mov 0x30(%rdx),%rax
3a8b: 4c 8b a8 88 04 00 00 mov 0x488(%rax),%r13
{
struct pcpu_sw_netstats *tx_stats, *rx_stats;
union vxlan_addr loopback;
union vxlan_addr *remote_ip = &dst_vxlan->default_dst.remote_ip;
struct net_device *dev = skb->dev;
int len = skb->len;
3a92: 65 4c 03 2d 00 00 00 add %gs:0x0(%rip),%r13 # 3a9a <vxlan_encap_bypass.isra.47+0x5a>
3a99: 00
return ndst;
}
#endif
/* Bypass encapsulation if the destination is local */
static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
3a9a: 0f b7 87 c4 00 00 00 movzwl 0xc4(%rdi),%eax
3aa1: 48 8b b7 d0 00 00 00 mov 0xd0(%rdi),%rsi
union vxlan_addr loopback;
union vxlan_addr *remote_ip = &dst_vxlan->default_dst.remote_ip;
struct net_device *dev = skb->dev;
int len = skb->len;
tx_stats = this_cpu_ptr(src_vxlan->dev->tstats);
3aa8: 4c 8b 8f d8 00 00 00 mov 0xd8(%rdi),%r9
3aaf: 80 a7 90 00 00 00 f8 andb $0xf8,0x90(%rdi)
3ab6: 80 a7 92 00 00 00 fd andb $0xfd,0x92(%rdi)
rx_stats = this_cpu_ptr(dst_vxlan->dev->tstats);
3abd: 48 8b 4a 30 mov 0x30(%rdx),%rcx
3ac1: 44 89 f2 mov %r14d,%edx
3ac4: 48 01 f0 add %rsi,%rax
3ac7: 4c 29 c8 sub %r9,%rax
return skb->inner_transport_header - skb->inner_network_header;
}
static inline int skb_network_offset(const struct sk_buff *skb)
{
return skb_network_header(skb) - skb->data;
3aca: 29 c2 sub %eax,%edx
3acc: 3b 97 84 00 00 00 cmp 0x84(%rdi),%edx
3ad2: 48 89 4f 20 mov %rcx,0x20(%rdi)
3ad6: 89 97 80 00 00 00 mov %edx,0x80(%rdi)
3adc: 0f 82 be 00 00 00 jb 3ba0 <vxlan_encap_bypass.isra.47+0x160>
skb->pkt_type = PACKET_HOST;
3ae2: 89 c0 mov %eax,%eax
3ae4: 4c 01 c8 add %r9,%rax
skb->encapsulation = 0;
3ae7: 48 89 87 d8 00 00 00 mov %rax,0xd8(%rdi)
skb->dev = dst_vxlan->dev;
3aee: 66 41 83 78 40 02 cmpw $0x2,0x40(%r8)
3af4: 0f 84 90 00 00 00 je 3b8a <vxlan_encap_bypass.isra.47+0x14a>
}
unsigned char *skb_pull(struct sk_buff *skb, unsigned int len);
static inline unsigned char *__skb_pull(struct sk_buff *skb, unsigned int len)
{
skb->len -= len;
3afa: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 3b01 <vxlan_encap_bypass.isra.47+0xc1>
BUG_ON(skb->len < skb->data_len);
3b01: 48 8b 15 00 00 00 00 mov 0x0(%rip),%rdx # 3b08 <vxlan_encap_bypass.isra.47+0xc8>
}
unsigned char *skb_pull(struct sk_buff *skb, unsigned int len);
static inline unsigned char *__skb_pull(struct sk_buff *skb, unsigned int len)
{
skb->len -= len;
3b08: 48 89 44 24 08 mov %rax,0x8(%rsp)
BUG_ON(skb->len < skb->data_len);
3b0d: b8 0a 00 00 00 mov $0xa,%eax
return skb->data += len;
3b12: 48 89 54 24 10 mov %rdx,0x10(%rsp)
3b17: 66 89 04 24 mov %ax,(%rsp)
3b1b: 41 f6 80 98 00 00 00 testb $0x1,0x98(%r8)
3b22: 01
__skb_pull(skb, skb_network_offset(skb));
if (remote_ip->sa.sa_family == AF_INET) {
3b23: 49 89 ff mov %rdi,%r15
3b26: 75 49 jne 3b71 <vxlan_encap_bypass.isra.47+0x131>
3b28: 49 83 44 24 10 01 addq $0x1,0x10(%r12)
loopback.sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
loopback.sa.sa_family = AF_INET;
#if IS_ENABLED(CONFIG_IPV6)
} else {
loopback.sin6.sin6_addr = in6addr_loopback;
3b2e: 4d 01 74 24 18 add %r14,0x18(%r12)
3b33: 4c 89 ff mov %r15,%rdi
3b36: e8 00 00 00 00 callq 3b3b <vxlan_encap_bypass.isra.47+0xfb>
3b3b: 85 c0 test %eax,%eax
loopback.sa.sa_family = AF_INET6;
3b3d: 74 27 je 3b66 <vxlan_encap_bypass.isra.47+0x126>
3b3f: 48 83 83 60 01 00 00 addq $0x1,0x160(%rbx)
3b46: 01
3b47: 48 8b 44 24 28 mov 0x28(%rsp),%rax
#endif
}
if (dst_vxlan->flags & VXLAN_F_LEARN)
3b4c: 65 48 33 04 25 28 00 xor %gs:0x28,%rax
3b53: 00 00
3b55: 75 4b jne 3ba2 <vxlan_encap_bypass.isra.47+0x162>
3b57: 48 8d 65 d8 lea -0x28(%rbp),%rsp
vxlan_snoop(skb->dev, &loopback, eth_hdr(skb)->h_source);
u64_stats_update_begin(&tx_stats->syncp);
tx_stats->tx_packets++;
3b5b: 5b pop %rbx
3b5c: 41 5c pop %r12
tx_stats->tx_bytes += len;
3b5e: 41 5d pop %r13
3b60: 41 5e pop %r14
3b62: 41 5f pop %r15
u64_stats_update_end(&tx_stats->syncp);
if (netif_rx(skb) == NET_RX_SUCCESS) {
3b64: 5d pop %rbp
3b65: c3 retq
3b66: 49 83 45 00 01 addq $0x1,0x0(%r13)
3b6b: 4d 01 75 08 add %r14,0x8(%r13)
u64_stats_update_begin(&rx_stats->syncp);
rx_stats->rx_packets++;
rx_stats->rx_bytes += len;
u64_stats_update_end(&rx_stats->syncp);
} else {
dev->stats.rx_dropped++;
3b6f: eb d6 jmp 3b47 <vxlan_encap_bypass.isra.47+0x107>
3b71: 0f b7 87 c6 00 00 00 movzwl 0xc6(%rdi),%eax
}
}
3b78: 48 89 cf mov %rcx,%rdi
3b7b: 48 8d 54 06 06 lea 0x6(%rsi,%rax,1),%rdx
3b80: 48 89 e6 mov %rsp,%rsi
3b83: e8 a8 f7 ff ff callq 3330 <vxlan_snoop>
3b88: eb 9e jmp 3b28 <vxlan_encap_bypass.isra.47+0xe8>
3b8a: ba 02 00 00 00 mov $0x2,%edx
3b8f: c7 44 24 04 7f 00 00 movl $0x100007f,0x4(%rsp)
3b96: 01
tx_stats->tx_bytes += len;
u64_stats_update_end(&tx_stats->syncp);
if (netif_rx(skb) == NET_RX_SUCCESS) {
u64_stats_update_begin(&rx_stats->syncp);
rx_stats->rx_packets++;
3b97: 66 89 14 24 mov %dx,(%rsp)
rx_stats->rx_bytes += len;
3b9b: e9 7b ff ff ff jmpq 3b1b <vxlan_encap_bypass.isra.47+0xdb>
3ba0: 0f 0b ud2
loopback.sa.sa_family = AF_INET6;
#endif
}
if (dst_vxlan->flags & VXLAN_F_LEARN)
vxlan_snoop(skb->dev, &loopback, eth_hdr(skb)->h_source);
3ba2: e8 00 00 00 00 callq 3ba7 <vxlan_encap_bypass.isra.47+0x167>
3ba7: 66 0f 1f 84 00 00 00 nopw 0x0(%rax,%rax,1)
3bae: 00 00
0000000000003bb0 <vxlan_exit_net>:
3bb0: e8 00 00 00 00 callq 3bb5 <vxlan_exit_net+0x5>
3bb5: 55 push %rbp
3bb6: 48 89 e5 mov %rsp,%rbp
3bb9: 41 57 push %r15
skb->dev = dst_vxlan->dev;
__skb_pull(skb, skb_network_offset(skb));
if (remote_ip->sa.sa_family == AF_INET) {
loopback.sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
loopback.sa.sa_family = AF_INET;
3bbb: 41 56 push %r14
3bbd: 41 55 push %r13
skb->encapsulation = 0;
skb->dev = dst_vxlan->dev;
__skb_pull(skb, skb_network_offset(skb));
if (remote_ip->sa.sa_family == AF_INET) {
loopback.sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
3bbf: 41 54 push %r12
3bc1: 49 89 fe mov %rdi,%r14
3bc4: 53 push %rbx
3bc5: 4d 8d a6 10 01 00 00 lea 0x110(%r14),%r12
loopback.sa.sa_family = AF_INET;
3bcc: 48 83 ec 20 sub $0x20,%rsp
unsigned char *skb_pull(struct sk_buff *skb, unsigned int len);
static inline unsigned char *__skb_pull(struct sk_buff *skb, unsigned int len)
{
skb->len -= len;
BUG_ON(skb->len < skb->data_len);
3bd0: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
3bd7: 00 00
rx_stats->rx_bytes += len;
u64_stats_update_end(&rx_stats->syncp);
} else {
dev->stats.rx_dropped++;
}
}
3bd9: 48 89 45 d0 mov %rax,-0x30(%rbp)
3bdd: 31 c0 xor %eax,%eax
3bdf: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # 3be5 <vxlan_exit_net+0x35>
return 0;
}
static void __net_exit vxlan_exit_net(struct net *net)
{
3be5: 48 8b 97 88 14 00 00 mov 0x1488(%rdi),%rdx
3bec: 83 e8 01 sub $0x1,%eax
3bef: 48 98 cltq
3bf1: 48 8b 44 c2 18 mov 0x18(%rdx,%rax,8),%rax
struct vxlan_dev *vxlan, *next;
struct net_device *dev, *aux;
LIST_HEAD(list);
rtnl_lock();
for_each_netdev_safe(net, dev, aux)
3bf6: 48 89 45 b8 mov %rax,-0x48(%rbp)
3bfa: 48 8d 45 c0 lea -0x40(%rbp),%rax
return 0;
}
static void __net_exit vxlan_exit_net(struct net *net)
{
3bfe: 48 89 45 c0 mov %rax,-0x40(%rbp)
3c02: 48 89 45 c8 mov %rax,-0x38(%rbp)
3c06: e8 00 00 00 00 callq 3c0b <vxlan_exit_net+0x5b>
3c0b: 49 8b 86 10 01 00 00 mov 0x110(%r14),%rax
3c12: 48 8b 30 mov (%rax),%rsi
3c15: 49 39 c4 cmp %rax,%r12
3c18: 48 8d 78 b0 lea -0x50(%rax),%rdi
3c1c: 48 8d 5e b0 lea -0x50(%rsi),%rbx
3c20: 75 05 jne 3c27 <vxlan_exit_net+0x77>
3c22: eb 28 jmp 3c4c <vxlan_exit_net+0x9c>
3c24: 48 89 c3 mov %rax,%rbx
3c27: 48 81 bf 98 07 00 00 cmpq $0x0,0x798(%rdi)
3c2e: 00 00 00 00
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
struct vxlan_dev *vxlan, *next;
struct net_device *dev, *aux;
LIST_HEAD(list);
3c32: 0f 84 3a 01 00 00 je 3d72 <vxlan_exit_net+0x1c2>
rtnl_lock();
3c38: 48 8b 43 50 mov 0x50(%rbx),%rax
for_each_netdev_safe(net, dev, aux)
3c3c: 48 8d 53 50 lea 0x50(%rbx),%rdx
3c40: 48 89 df mov %rbx,%rdi
3c43: 48 83 e8 50 sub $0x50,%rax
3c47: 49 39 d4 cmp %rdx,%r12
3c4a: 75 d8 jne 3c24 <vxlan_exit_net+0x74>
3c4c: 48 8b 75 b8 mov -0x48(%rbp),%rsi
3c50: 48 8b 06 mov (%rsi),%rax
3c53: 48 8b 18 mov (%rax),%rbx
3c56: 48 39 c6 cmp %rax,%rsi
if (dev->rtnl_link_ops == &vxlan_link_ops)
3c59: 4c 8d 60 f0 lea -0x10(%rax),%r12
3c5d: 4c 8d 6b f0 lea -0x10(%rbx),%r13
3c61: 0f 84 df 00 00 00 je 3d46 <vxlan_exit_net+0x196>
3c67: 49 8b 7c 24 30 mov 0x30(%r12),%rdi
struct vxlan_dev *vxlan, *next;
struct net_device *dev, *aux;
LIST_HEAD(list);
rtnl_lock();
for_each_netdev_safe(net, dev, aux)
3c6c: 4c 3b b7 80 04 00 00 cmp 0x480(%rdi),%r14
3c73: 0f 84 b0 00 00 00 je 3d29 <vxlan_exit_net+0x179>
3c79: 49 83 bc 24 f0 00 00 cmpq $0x0,0xf0(%r12)
3c80: 00 00
if (dev->rtnl_link_ops == &vxlan_link_ops)
unregister_netdevice_queue(dev, &list);
list_for_each_entry_safe(vxlan, next, &vn->vxlan_list, next) {
3c82: bb ff ff ff ff mov $0xffffffff,%ebx
3c87: 0f 84 93 00 00 00 je 3d20 <vxlan_exit_net+0x170>
3c8d: 8d 53 01 lea 0x1(%rbx),%edx
3c90: be 00 01 00 00 mov $0x100,%esi
3c95: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
/* If vxlan->dev is in the same netns, it has already been added
* to the list by the previous loop.
*/
if (!net_eq(dev_net(vxlan->dev), net)) {
3c9c: 48 63 d2 movslq %edx,%rdx
3c9f: e8 00 00 00 00 callq 3ca4 <vxlan_exit_net+0xf4>
3ca4: 3b 05 00 00 00 00 cmp 0x0(%rip),%eax # 3caa <vxlan_exit_net+0xfa>
static inline void gro_cells_destroy(struct gro_cells *gcells)
{
int i;
if (!gcells->cells)
3caa: 89 c3 mov %eax,%ebx
3cac: 7d 54 jge 3d02 <vxlan_exit_net+0x152>
3cae: 48 98 cltq
3cb0: 49 8b 94 24 f0 00 00 mov 0xf0(%r12),%rdx
3cb7: 00
3cb8: 48 03 14 c5 00 00 00 add 0x0(,%rax,8),%rdx
3cbf: 00
3cc0: 48 8d 7a 18 lea 0x18(%rdx),%rdi
3cc4: 49 89 d7 mov %rdx,%r15
3cc7: e8 00 00 00 00 callq 3ccc <vxlan_exit_net+0x11c>
3ccc: 49 8b 3f mov (%r15),%rdi
3ccf: 49 39 ff cmp %rdi,%r15
3cd2: 74 b9 je 3c8d <vxlan_exit_net+0xdd>
return;
for_each_possible_cpu(i) {
3cd4: 48 85 ff test %rdi,%rdi
3cd7: 74 b4 je 3c8d <vxlan_exit_net+0xdd>
3cd9: 41 83 6f 10 01 subl $0x1,0x10(%r15)
struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
3cde: 48 8b 0f mov (%rdi),%rcx
3ce1: 48 8b 57 08 mov 0x8(%rdi),%rdx
3ce5: 48 c7 07 00 00 00 00 movq $0x0,(%rdi)
3cec: 48 c7 47 08 00 00 00 movq $0x0,0x8(%rdi)
3cf3: 00
3cf4: 48 89 51 08 mov %rdx,0x8(%rcx)
netif_napi_del(&cell->napi);
3cf8: 48 89 0a mov %rcx,(%rdx)
3cfb: e8 00 00 00 00 callq 3d00 <vxlan_exit_net+0x150>
*/
static inline struct sk_buff *skb_peek(const struct sk_buff_head *list_)
{
struct sk_buff *skb = list_->next;
if (skb == (struct sk_buff *)list_)
3d00: eb ca jmp 3ccc <vxlan_exit_net+0x11c>
3d02: 49 8b bc 24 f0 00 00 mov 0xf0(%r12),%rdi
3d09: 00
void skb_unlink(struct sk_buff *skb, struct sk_buff_head *list);
static inline void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)
{
struct sk_buff *next, *prev;
list->qlen--;
3d0a: e8 00 00 00 00 callq 3d0f <vxlan_exit_net+0x15f>
next = skb->next;
3d0f: 49 8b 7c 24 30 mov 0x30(%r12),%rdi
prev = skb->prev;
3d14: 49 c7 84 24 f0 00 00 movq $0x0,0xf0(%r12)
3d1b: 00 00 00 00 00
skb->next = skb->prev = NULL;
3d20: 48 8d 75 c0 lea -0x40(%rbp),%rsi
next->prev = prev;
3d24: e8 00 00 00 00 callq 3d29 <vxlan_exit_net+0x179>
prev->next = next;
3d29: 49 8b 45 10 mov 0x10(%r13),%rax
void skb_queue_purge(struct sk_buff_head *list);
static inline void __skb_queue_purge(struct sk_buff_head *list)
{
struct sk_buff *skb;
while ((skb = __skb_dequeue(list)) != NULL)
kfree_skb(skb);
3d2d: 49 8d 55 10 lea 0x10(%r13),%rdx
3d31: 4d 89 ec mov %r13,%r12
__skb_queue_purge(&cell->napi_skbs);
}
free_percpu(gcells->cells);
3d34: 48 83 e8 10 sub $0x10,%rax
3d38: 48 39 55 b8 cmp %rdx,-0x48(%rbp)
3d3c: 74 08 je 3d46 <vxlan_exit_net+0x196>
3d3e: 49 89 c5 mov %rax,%r13
3d41: e9 21 ff ff ff jmpq 3c67 <vxlan_exit_net+0xb7>
gcells->cells = NULL;
3d46: 48 8d 7d c0 lea -0x40(%rbp),%rdi
3d4a: e8 00 00 00 00 callq 3d4f <vxlan_exit_net+0x19f>
3d4f: e8 00 00 00 00 callq 3d54 <vxlan_exit_net+0x1a4>
gro_cells_destroy(&vxlan->gro_cells);
unregister_netdevice_queue(vxlan->dev, &list);
3d54: 48 8b 45 d0 mov -0x30(%rbp),%rax
3d58: 65 48 33 04 25 28 00 xor %gs:0x28,%rax
3d5f: 00 00
rtnl_lock();
for_each_netdev_safe(net, dev, aux)
if (dev->rtnl_link_ops == &vxlan_link_ops)
unregister_netdevice_queue(dev, &list);
list_for_each_entry_safe(vxlan, next, &vn->vxlan_list, next) {
3d61: 75 1d jne 3d80 <vxlan_exit_net+0x1d0>
3d63: 48 83 c4 20 add $0x20,%rsp
3d67: 5b pop %rbx
3d68: 41 5c pop %r12
3d6a: 41 5d pop %r13
3d6c: 41 5e pop %r14
3d6e: 41 5f pop %r15
3d70: 5d pop %rbp
3d71: c3 retq
3d72: 48 8d 75 c0 lea -0x40(%rbp),%rsi
gro_cells_destroy(&vxlan->gro_cells);
unregister_netdevice_queue(vxlan->dev, &list);
}
}
unregister_netdevice_many(&list);
3d76: e8 00 00 00 00 callq 3d7b <vxlan_exit_net+0x1cb>
3d7b: e9 b8 fe ff ff jmpq 3c38 <vxlan_exit_net+0x88>
rtnl_unlock();
3d80: e8 00 00 00 00 callq 3d85 <vxlan_exit_net+0x1d5>
}
3d85: 90 nop
3d86: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
3d8d: 00 00 00
0000000000003d90 <vxlan_build_skb>:
3d90: e8 00 00 00 00 callq 3d95 <vxlan_build_skb+0x5>
3d95: 55 push %rbp
3d96: 48 89 e5 mov %rsp,%rbp
3d99: 41 57 push %r15
3d9b: 41 56 push %r14
3d9d: 41 55 push %r13
3d9f: 41 54 push %r12
3da1: 45 89 ce mov %r9d,%r14d
LIST_HEAD(list);
rtnl_lock();
for_each_netdev_safe(net, dev, aux)
if (dev->rtnl_link_ops == &vxlan_link_ops)
unregister_netdevice_queue(dev, &list);
3da4: 53 push %rbx
3da5: 48 89 fb mov %rdi,%rbx
3da8: 41 89 cc mov %ecx,%r12d
3dab: 48 83 ec 18 sub $0x18,%rsp
3daf: 80 7d 10 01 cmpb $0x1,0x10(%rbp)
}
}
unregister_netdevice_many(&list);
rtnl_unlock();
}
3db3: 4c 89 45 d0 mov %r8,-0x30(%rbp)
3db7: 45 19 ed sbb %r13d,%r13d
3dba: 41 81 e5 00 f8 ff ff and $0xfffff800,%r13d
static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst,
int iphdr_len, __be32 vni,
struct vxlan_metadata *md, u32 vxflags,
bool udp_sum)
{
3dc1: 41 81 c5 00 10 00 00 add $0x1000,%r13d
3dc8: 41 81 e1 00 02 00 00 and $0x200,%r9d
3dcf: 0f 85 d6 00 00 00 jne 3eab <vxlan_build_skb+0x11b>
3dd5: 48 8b 8f d0 00 00 00 mov 0xd0(%rdi),%rcx
3ddc: 48 8b bf d8 00 00 00 mov 0xd8(%rdi),%rdi
3de3: 41 89 f9 mov %edi,%r9d
3de6: 41 29 c9 sub %ecx,%r9d
struct vxlanhdr *vxh;
int min_headroom;
int err;
int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
3de9: 4c 8b 56 18 mov 0x18(%rsi),%r10
3ded: 44 0f b7 bb aa 00 00 movzwl 0xaa(%rbx),%r15d
3df4: 00
3df5: 0f b7 76 68 movzwl 0x68(%rsi),%esi
__be16 inner_protocol = htons(ETH_P_TEB);
if ((vxflags & VXLAN_F_REMCSUM_TX) &&
3df9: 45 0f b7 9a 4e 02 00 movzwl 0x24e(%r10),%r11d
3e00: 00
3e01: 41 0f b7 82 50 02 00 movzwl 0x250(%r10),%eax
3e08: 00
3e09: 45 89 fa mov %r15d,%r10d
3e0c: 66 41 81 e2 00 10 and $0x1000,%r10w
3e12: 44 01 d8 add %r11d,%eax
3e15: 83 e0 f0 and $0xfffffff0,%eax
3e18: 66 41 83 fa 01 cmp $0x1,%r10w
type |= SKB_GSO_TUNNEL_REMCSUM;
}
min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len
+ VXLAN_HLEN + iphdr_len
+ (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
3e1d: 8d 74 30 10 lea 0x10(%rax,%rsi,1),%esi
3e21: 19 c0 sbb %eax,%eax
3e23: f7 d0 not %eax
(skb->csum_offset == offsetof(struct udphdr, check) ||
skb->csum_offset == offsetof(struct tcphdr, check)))
type |= SKB_GSO_TUNNEL_REMCSUM;
}
min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len
3e25: 8d 54 16 10 lea 0x10(%rsi,%rdx,1),%edx
3e29: 83 e0 04 and $0x4,%eax
3e2c: 01 d0 add %edx,%eax
3e2e: f6 83 8e 00 00 00 01 testb $0x1,0x8e(%rbx)
3e35: 0f 84 3e 02 00 00 je 4079 <vxlan_build_skb+0x2e9>
3e3b: 8b 93 cc 00 00 00 mov 0xcc(%rbx),%edx
3e41: 48 01 d1 add %rdx,%rcx
3e44: 8b 51 20 mov 0x20(%rcx),%edx
3e47: 0f b7 ca movzwl %dx,%ecx
3e4a: c1 fa 10 sar $0x10,%edx
3e4d: 29 d1 sub %edx,%ecx
3e4f: 83 f9 01 cmp $0x1,%ecx
3e52: 0f 95 c2 setne %dl
3e55: 31 f6 xor %esi,%esi
3e57: 41 39 c1 cmp %eax,%r9d
3e5a: 0f b6 d2 movzbl %dl,%edx
3e5d: 73 05 jae 3e64 <vxlan_build_skb+0xd4>
*/
static inline int skb_header_cloned(const struct sk_buff *skb)
{
int dataref;
if (!skb->cloned)
3e5f: 44 29 c8 sub %r9d,%eax
3e62: 89 c6 mov %eax,%esi
3e64: 09 f2 or %esi,%edx
3e66: 0f 85 7d 01 00 00 jne 3fe9 <vxlan_build_skb+0x259>
};
#ifdef NET_SKBUFF_DATA_USES_OFFSET
static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)
{
return skb->head + skb->end;
3e6c: 66 45 85 d2 test %r10w,%r10w
3e70: 0f 85 ae 01 00 00 jne 4024 <vxlan_build_skb+0x294>
3e76: 48 85 db test %rbx,%rbx
if (!skb->cloned)
return 0;
dataref = atomic_read(&skb_shinfo(skb)->dataref);
dataref = (dataref & SKB_DATAREF_MASK) - (dataref >> SKB_DATAREF_SHIFT);
return dataref != 1;
3e79: 0f 84 69 03 00 00 je 41e8 <vxlan_build_skb+0x458>
3e7f: 44 89 ee mov %r13d,%esi
3e82: 48 89 df mov %rbx,%rdi
}
static inline int __skb_cow(struct sk_buff *skb, unsigned int headroom,
int cloned)
{
int delta = 0;
3e85: e8 00 00 00 00 callq 3e8a <vxlan_build_skb+0xfa>
if (!skb->cloned)
return 0;
dataref = atomic_read(&skb_shinfo(skb)->dataref);
dataref = (dataref & SKB_DATAREF_MASK) - (dataref >> SKB_DATAREF_SHIFT);
return dataref != 1;
3e8a: 85 c0 test %eax,%eax
3e8c: 41 89 c7 mov %eax,%r15d
int cloned)
{
int delta = 0;
if (headroom > skb_headroom(skb))
delta = headroom - skb_headroom(skb);
3e8f: 74 45 je 3ed6 <vxlan_build_skb+0x146>
3e91: 48 89 df mov %rbx,%rdi
if (delta || cloned)
3e94: e8 00 00 00 00 callq 3e99 <vxlan_build_skb+0x109>
3e99: 48 83 c4 18 add $0x18,%rsp
* Following the skb_unshare() example, in case of error, the calling function
* doesn't have to worry about freeing the original skb.
*/
static inline struct sk_buff *vlan_hwaccel_push_inside(struct sk_buff *skb)
{
if (skb_vlan_tag_present(skb))
3e9d: 44 89 f8 mov %r15d,%eax
3ea0: 5b pop %rbx
3ea1: 41 5c pop %r12
3ea3: 41 5d pop %r13
3ea5: 41 5e pop %r14
err = skb_cow_head(skb, min_headroom);
if (unlikely(err))
goto out_free;
skb = vlan_hwaccel_push_inside(skb);
if (WARN_ON(!skb))
3ea7: 41 5f pop %r15
3ea9: 5d pop %rbp
3eaa: c3 retq
3eab: 0f b6 87 91 00 00 00 movzbl 0x91(%rdi),%eax
return -ENOMEM;
err = iptunnel_handle_offloads(skb, type);
3eb2: 48 8b 8f d0 00 00 00 mov 0xd0(%rdi),%rcx
3eb9: 48 8b bf d8 00 00 00 mov 0xd8(%rdi),%rdi
if (err)
3ec0: 83 e0 06 and $0x6,%eax
skb_set_inner_protocol(skb, inner_protocol);
return 0;
out_free:
kfree_skb(skb);
3ec3: 3c 06 cmp $0x6,%al
3ec5: 0f 84 83 02 00 00 je 414e <vxlan_build_skb+0x3be>
return err;
}
3ecb: 41 89 f9 mov %edi,%r9d
skb_set_inner_protocol(skb, inner_protocol);
return 0;
out_free:
kfree_skb(skb);
return err;
3ece: 41 29 c9 sub %ecx,%r9d
}
3ed1: e9 13 ff ff ff jmpq 3de9 <vxlan_build_skb+0x59>
3ed6: 48 8b 83 d8 00 00 00 mov 0xd8(%rbx),%rax
int min_headroom;
int err;
int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
__be16 inner_protocol = htons(ETH_P_TEB);
if ((vxflags & VXLAN_F_REMCSUM_TX) &&
3edd: 83 83 80 00 00 00 08 addl $0x8,0x80(%rbx)
3ee4: 41 c1 ec 08 shr $0x8,%r12d
3ee8: 41 81 e5 00 40 00 00 and $0x4000,%r13d
*
* Return the number of bytes of free space at the head of an &sk_buff.
*/
static inline unsigned int skb_headroom(const struct sk_buff *skb)
{
return skb->data - skb->head;
3eef: 48 8d 50 f8 lea -0x8(%rax),%rdx
3ef3: 48 89 93 d8 00 00 00 mov %rdx,0xd8(%rbx)
3efa: c7 40 f8 08 00 00 00 movl $0x8,-0x8(%rax)
3f01: 44 89 60 fc mov %r12d,-0x4(%rax)
3f05: 74 57 je 3f5e <vxlan_build_skb+0x1ce>
}
unsigned char *skb_push(struct sk_buff *skb, unsigned int len);
static inline unsigned char *__skb_push(struct sk_buff *skb, unsigned int len)
{
skb->data -= len;
3f07: 0f b7 8b 98 00 00 00 movzwl 0x98(%rbx),%ecx
skb->len += len;
3f0e: 48 8b 93 d8 00 00 00 mov 0xd8(%rbx),%rdx
static inline __be32 vxlan_vni_field(__be32 vni)
{
#if defined(__BIG_ENDIAN)
return (__force __be32)((__force u32)vni << 8);
#else
return (__force __be32)((__force u32)vni >> 8);
3f15: 48 2b 93 d0 00 00 00 sub 0xd0(%rbx),%rdx
vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
vxh->vx_flags = VXLAN_HF_VNI;
vxh->vx_vni = vxlan_vni_field(vni);
if (type & SKB_GSO_TUNNEL_REMCSUM) {
3f1c: 83 e9 08 sub $0x8,%ecx
}
unsigned char *skb_push(struct sk_buff *skb, unsigned int len);
static inline unsigned char *__skb_push(struct sk_buff *skb, unsigned int len)
{
skb->data -= len;
3f1f: 29 d1 sub %edx,%ecx
3f21: d1 e9 shr %ecx
3f23: 0f c9 bswap %ecx
3f25: 89 ca mov %ecx,%edx
3f27: 81 c9 00 00 00 80 or $0x80000000,%ecx
err = iptunnel_handle_offloads(skb, type);
if (err)
goto out_free;
vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
vxh->vx_flags = VXLAN_HF_VNI;
3f2d: 66 83 bb 9a 00 00 00 cmpw $0x6,0x9a(%rbx)
3f34: 06
vxh->vx_vni = vxlan_vni_field(vni);
if (type & SKB_GSO_TUNNEL_REMCSUM) {
3f35: c7 40 f8 08 20 00 00 movl $0x2008,-0x8(%rax)
offsetof(struct tcphdr, check);
}
static inline __be32 vxlan_compute_rco(unsigned int start, unsigned int offset)
{
__be32 vni_field = cpu_to_be32(start >> VXLAN_RCO_SHIFT);
3f3c: 0f 44 d1 cmove %ecx,%edx
3f3f: 44 09 e2 or %r12d,%edx
3f42: 89 50 fc mov %edx,-0x4(%rax)
3f45: 8b 93 cc 00 00 00 mov 0xcc(%rbx),%edx
3f4b: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
3f52: 66 83 7c 11 02 00 cmpw $0x0,0x2(%rcx,%rdx,1)
if (offset == offsetof(struct udphdr, check))
vni_field |= VXLAN_RCO_UDP;
3f58: 0f 84 39 02 00 00 je 4197 <vxlan_build_skb+0x407>
3f5e: 41 f7 c6 00 08 00 00 test $0x800,%r14d
unsigned int start;
start = skb_checksum_start_offset(skb) - sizeof(struct vxlanhdr);
vxh->vx_vni |= vxlan_compute_rco(start, skb->csum_offset);
vxh->vx_flags |= VXLAN_HF_RCO;
3f65: 74 3b je 3fa2 <vxlan_build_skb+0x212>
3f67: 48 8b 7d d0 mov -0x30(%rbp),%rdi
3f6b: 8b 17 mov (%rdi),%edx
3f6d: 85 d2 test %edx,%edx
if (type & SKB_GSO_TUNNEL_REMCSUM) {
unsigned int start;
start = skb_checksum_start_offset(skb) - sizeof(struct vxlanhdr);
vxh->vx_vni |= vxlan_compute_rco(start, skb->csum_offset);
3f6f: 74 31 je 3fa2 <vxlan_build_skb+0x212>
3f71: 81 48 f8 80 00 00 00 orl $0x80,-0x8(%rax)
return csum_fold(csum_partial(csum_start, plen, partial));
}
static inline bool skb_is_gso(const struct sk_buff *skb)
{
return skb_shinfo(skb)->gso_size;
3f78: 8b 17 mov (%rdi),%edx
3f7a: f7 c2 00 00 40 00 test $0x400000,%edx
3f80: 74 06 je 3f88 <vxlan_build_skb+0x1f8>
vxh->vx_flags |= VXLAN_HF_RCO;
if (!skb_is_gso(skb)) {
3f82: 80 48 f9 40 orb $0x40,-0x7(%rax)
3f86: 8b 17 mov (%rdi),%edx
3f88: f7 c2 00 00 08 00 test $0x80000,%edx
skb->ip_summed = CHECKSUM_NONE;
skb->encapsulation = 0;
}
}
if (vxflags & VXLAN_F_GBP)
3f8e: 74 0a je 3f9a <vxlan_build_skb+0x20a>
3f90: 48 8b 7d d0 mov -0x30(%rbp),%rdi
3f94: 80 48 f9 08 orb $0x8,-0x7(%rax)
static void vxlan_build_gbp_hdr(struct vxlanhdr *vxh, u32 vxflags,
struct vxlan_metadata *md)
{
struct vxlanhdr_gbp *gbp;
if (!md->gbp)
3f98: 8b 17 mov (%rdi),%edx
3f9a: 66 c1 c2 08 rol $0x8,%dx
3f9e: 66 89 50 fa mov %dx,-0x6(%rax)
return;
gbp = (struct vxlanhdr_gbp *)vxh;
vxh->vx_flags |= VXLAN_HF_GBP;
3fa2: 41 81 e6 00 40 00 00 and $0x4000,%r14d
if (md->gbp & VXLAN_GBP_DONT_LEARN)
3fa9: 0f 84 25 02 00 00 je 41d4 <vxlan_build_skb+0x444>
3faf: 0f b7 93 c0 00 00 00 movzwl 0xc0(%rbx),%edx
gbp->dont_learn = 1;
3fb6: 80 48 f8 04 orb $0x4,-0x8(%rax)
if (md->gbp & VXLAN_GBP_POLICY_APPLIED)
3fba: 66 81 fa 65 58 cmp $0x5865,%dx
3fbf: 0f 84 e5 01 00 00 je 41aa <vxlan_build_skb+0x41a>
gbp->policy_applied = 1;
3fc5: 66 81 fa 86 dd cmp $0xdd86,%dx
gbp->policy_id = htons(md->gbp & VXLAN_GBP_ID_MASK);
3fca: 0f 84 3d 02 00 00 je 420d <vxlan_build_skb+0x47d>
3fd0: 66 83 fa 08 cmp $0x8,%dx
}
}
if (vxflags & VXLAN_F_GBP)
vxlan_build_gbp_hdr(vxh, vxflags, md);
if (vxflags & VXLAN_F_GPE) {
3fd4: 41 bf a0 ff ff ff mov $0xffffffa0,%r15d
3fda: 0f 85 b1 fe ff ff jne 3e91 <vxlan_build_skb+0x101>
err = vxlan_build_gpe_hdr(vxh, vxflags, skb->protocol);
3fe0: c6 40 fb 01 movb $0x1,-0x5(%rax)
3fe4: e9 c5 01 00 00 jmpq 41ae <vxlan_build_skb+0x41e>
static int vxlan_build_gpe_hdr(struct vxlanhdr *vxh, u32 vxflags,
__be16 protocol)
{
struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)vxh;
gpe->np_applied = 1;
3fe9: 83 c6 3f add $0x3f,%esi
switch (protocol) {
3fec: 31 d2 xor %edx,%edx
3fee: b9 20 00 08 02 mov $0x2080020,%ecx
3ff3: 83 e6 c0 and $0xffffffc0,%esi
3ff6: 48 89 df mov %rbx,%rdi
3ff9: e8 00 00 00 00 callq 3ffe <vxlan_build_skb+0x26e>
3ffe: 85 c0 test %eax,%eax
4000: 41 89 c7 mov %eax,%r15d
4003: 0f 85 88 fe ff ff jne 3e91 <vxlan_build_skb+0x101>
return 0;
case htons(ETH_P_TEB):
gpe->next_protocol = VXLAN_GPE_NP_ETHERNET;
return 0;
}
return -EPFNOSUPPORT;
4009: 44 0f b7 bb aa 00 00 movzwl 0xaa(%rbx),%r15d
4010: 00
gpe->np_applied = 1;
switch (protocol) {
case htons(ETH_P_IP):
gpe->next_protocol = VXLAN_GPE_NP_IPV4;
4011: 45 89 fa mov %r15d,%r10d
4014: 66 41 81 e2 00 10 and $0x1000,%r10w
if (headroom > skb_headroom(skb))
delta = headroom - skb_headroom(skb);
if (delta || cloned)
return pskb_expand_head(skb, ALIGN(delta, NET_SKB_PAD), 0,
401a: 66 45 85 d2 test %r10w,%r10w
401e: 0f 84 52 fe ff ff je 3e76 <vxlan_build_skb+0xe6>
4024: 66 41 81 e7 ff ef and $0xefff,%r15w
402a: f6 83 8e 00 00 00 01 testb $0x1,0x8e(%rbx)
4031: 44 0f b7 8b a8 00 00 movzwl 0xa8(%rbx),%r9d
4038: 00
+ VXLAN_HLEN + iphdr_len
+ (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
/* Need space for new headers (invalidates iph ptr) */
err = skb_cow_head(skb, min_headroom);
if (unlikely(err))
4039: 0f 84 d4 01 00 00 je 4213 <vxlan_build_skb+0x483>
403f: 8b 83 cc 00 00 00 mov 0xcc(%rbx),%eax
4045: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
404c: 48 01 c8 add %rcx,%rax
404f: 8b 40 20 mov 0x20(%rax),%eax
4052: 0f b7 d0 movzwl %ax,%edx
4055: c1 f8 10 sar $0x10,%eax
4058: 29 c2 sub %eax,%edx
*/
static inline int skb_header_cloned(const struct sk_buff *skb)
{
int dataref;
if (!skb->cloned)
405a: 89 d0 mov %edx,%eax
405c: 31 d2 xor %edx,%edx
405e: 83 f8 01 cmp $0x1,%eax
* Following the skb_unshare() example, in case of error, the calling function
* doesn't have to worry about freeing the original skb.
*/
static inline struct sk_buff *__vlan_hwaccel_push_inside(struct sk_buff *skb)
{
skb = vlan_insert_tag_set_proto(skb, skb->vlan_proto,
4061: 48 8b 83 d8 00 00 00 mov 0xd8(%rbx),%rax
4068: 0f 95 c2 setne %dl
406b: 31 f6 xor %esi,%esi
406d: 48 29 c8 sub %rcx,%rax
};
#ifdef NET_SKBUFF_DATA_USES_OFFSET
static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)
{
return skb->head + skb->end;
4070: 83 f8 03 cmp $0x3,%eax
4073: 89 c1 mov %eax,%ecx
4075: 77 3a ja 40b1 <vxlan_build_skb+0x321>
4077: eb 31 jmp 40aa <vxlan_build_skb+0x31a>
4079: 41 39 c1 cmp %eax,%r9d
407c: 0f 82 c0 00 00 00 jb 4142 <vxlan_build_skb+0x3b2>
if (!skb->cloned)
return 0;
dataref = atomic_read(&skb_shinfo(skb)->dataref);
dataref = (dataref & SKB_DATAREF_MASK) - (dataref >> SKB_DATAREF_SHIFT);
return dataref != 1;
4082: 66 45 85 d2 test %r10w,%r10w
4086: 0f 84 ea fd ff ff je 3e76 <vxlan_build_skb+0xe6>
408c: 44 0f b7 8b a8 00 00 movzwl 0xa8(%rbx),%r9d
4093: 00
*
* Return the number of bytes of free space at the head of an &sk_buff.
*/
static inline unsigned int skb_headroom(const struct sk_buff *skb)
{
return skb->data - skb->head;
4094: 66 41 81 e7 ff ef and $0xefff,%r15w
if (!skb->cloned)
return 0;
dataref = atomic_read(&skb_shinfo(skb)->dataref);
dataref = (dataref & SKB_DATAREF_MASK) - (dataref >> SKB_DATAREF_SHIFT);
return dataref != 1;
409a: 48 2b bb d0 00 00 00 sub 0xd0(%rbx),%rdi
static inline int __skb_cow(struct sk_buff *skb, unsigned int headroom,
int cloned)
{
int delta = 0;
if (headroom > skb_headroom(skb))
40a1: 83 ff 03 cmp $0x3,%edi
*
* Return the number of bytes of free space at the head of an &sk_buff.
*/
static inline unsigned int skb_headroom(const struct sk_buff *skb)
{
return skb->data - skb->head;
40a4: 89 f9 mov %edi,%ecx
static inline int __skb_cow(struct sk_buff *skb, unsigned int headroom,
int cloned)
{
int delta = 0;
if (headroom > skb_headroom(skb))
40a6: 77 32 ja 40da <vxlan_build_skb+0x34a>
40a8: 31 d2 xor %edx,%edx
40aa: be 04 00 00 00 mov $0x4,%esi
40af: 29 ce sub %ecx,%esi
40b1: 09 f2 or %esi,%edx
* Following the skb_unshare() example, in case of error, the calling function
* doesn't have to worry about freeing the original skb.
*/
static inline struct sk_buff *vlan_hwaccel_push_inside(struct sk_buff *skb)
{
if (skb_vlan_tag_present(skb))
40b3: 74 25 je 40da <vxlan_build_skb+0x34a>
40b5: 83 c6 3f add $0x3f,%esi
40b8: 31 d2 xor %edx,%edx
40ba: b9 20 00 08 02 mov $0x2080020,%ecx
* Following the skb_unshare() example, in case of error, the calling function
* doesn't have to worry about freeing the original skb.
*/
static inline struct sk_buff *__vlan_hwaccel_push_inside(struct sk_buff *skb)
{
skb = vlan_insert_tag_set_proto(skb, skb->vlan_proto,
40bf: 83 e6 c0 and $0xffffffc0,%esi
40c2: 48 89 df mov %rbx,%rdi
40c5: 44 89 4d c8 mov %r9d,-0x38(%rbp)
40c9: e8 00 00 00 00 callq 40ce <vxlan_build_skb+0x33e>
*
* Return the number of bytes of free space at the head of an &sk_buff.
*/
static inline unsigned int skb_headroom(const struct sk_buff *skb)
{
return skb->data - skb->head;
40ce: 85 c0 test %eax,%eax
40d0: 44 8b 4d c8 mov -0x38(%rbp),%r9d
40d4: 0f 88 01 01 00 00 js 41db <vxlan_build_skb+0x44b>
int cloned)
{
int delta = 0;
if (headroom > skb_headroom(skb))
delta = headroom - skb_headroom(skb);
40da: be 04 00 00 00 mov $0x4,%esi
40df: 48 89 df mov %rbx,%rdi
if (delta || cloned)
40e2: 44 89 4d c4 mov %r9d,-0x3c(%rbp)
return pskb_expand_head(skb, ALIGN(delta, NET_SKB_PAD), 0,
40e6: e8 00 00 00 00 callq 40eb <vxlan_build_skb+0x35b>
40eb: 48 8b bb d8 00 00 00 mov 0xd8(%rbx),%rdi
40f2: ba 0c 00 00 00 mov $0xc,%edx
40f7: 48 89 45 c8 mov %rax,-0x38(%rbp)
40fb: 66 41 c1 c7 08 rol $0x8,%r15w
static inline int __vlan_insert_tag(struct sk_buff *skb,
__be16 vlan_proto, u16 vlan_tci)
{
struct vlan_ethhdr *veth;
if (skb_cow_head(skb, VLAN_HLEN) < 0)
4100: 48 8d 77 04 lea 0x4(%rdi),%rsi
4104: e8 00 00 00 00 callq 4109 <vxlan_build_skb+0x379>
4109: 48 8b 4d c8 mov -0x38(%rbp),%rcx
return -ENOMEM;
veth = (struct vlan_ethhdr *)skb_push(skb, VLAN_HLEN);
410d: 44 8b 4d c4 mov -0x3c(%rbp),%r9d
4111: 66 83 ab c6 00 00 00 subw $0x4,0xc6(%rbx)
4118: 04
4119: 48 85 db test %rbx,%rbx
/* Move the mac addresses to the beginning of the new header. */
memmove(skb->data, skb->data + VLAN_HLEN, 2 * ETH_ALEN);
411c: 66 44 89 49 0c mov %r9w,0xc(%rcx)
4121: 66 44 89 79 0e mov %r15w,0xe(%rcx)
4126: 0f 84 bc 00 00 00 je 41e8 <vxlan_build_skb+0x458>
/* first, the ethernet type */
veth->h_vlan_proto = vlan_proto;
/* now, the TCI */
veth->h_vlan_TCI = htons(vlan_tci);
412c: 31 c9 xor %ecx,%ecx
412e: 66 44 89 8b c0 00 00 mov %r9w,0xc0(%rbx)
4135: 00
return -ENOMEM;
veth = (struct vlan_ethhdr *)skb_push(skb, VLAN_HLEN);
/* Move the mac addresses to the beginning of the new header. */
memmove(skb->data, skb->data + VLAN_HLEN, 2 * ETH_ALEN);
4136: 66 89 8b aa 00 00 00 mov %cx,0xaa(%rbx)
skb->mac_header -= VLAN_HLEN;
/* first, the ethernet type */
veth->h_vlan_proto = vlan_proto;
413d: e9 3d fd ff ff jmpq 3e7f <vxlan_build_skb+0xef>
veth = (struct vlan_ethhdr *)skb_push(skb, VLAN_HLEN);
/* Move the mac addresses to the beginning of the new header. */
memmove(skb->data, skb->data + VLAN_HLEN, 2 * ETH_ALEN);
skb->mac_header -= VLAN_HLEN;
4142: 44 29 c8 sub %r9d,%eax
4145: 31 d2 xor %edx,%edx
4147: 89 c6 mov %eax,%esi
static inline struct sk_buff *vlan_insert_tag_set_proto(struct sk_buff *skb,
__be16 vlan_proto,
u16 vlan_tci)
{
skb = vlan_insert_tag(skb, vlan_proto, vlan_tci);
if (skb)
4149: e9 16 fd ff ff jmpq 3e64 <vxlan_build_skb+0xd4>
/* Move the mac addresses to the beginning of the new header. */
memmove(skb->data, skb->data + VLAN_HLEN, 2 * ETH_ALEN);
skb->mac_header -= VLAN_HLEN;
/* first, the ethernet type */
veth->h_vlan_proto = vlan_proto;
414e: 0f b7 83 98 00 00 00 movzwl 0x98(%rbx),%eax
/* now, the TCI */
veth->h_vlan_TCI = htons(vlan_tci);
4155: 49 89 fa mov %rdi,%r10
static inline struct sk_buff *vlan_insert_tag_set_proto(struct sk_buff *skb,
__be16 vlan_proto,
u16 vlan_tci)
{
skb = vlan_insert_tag(skb, vlan_proto, vlan_tci);
if (skb)
4158: 49 29 ca sub %rcx,%r10
415b: 45 89 d1 mov %r10d,%r9d
skb->protocol = vlan_proto;
415e: 44 29 d0 sub %r10d,%eax
4161: 3d fe 00 00 00 cmp $0xfe,%eax
static inline struct sk_buff *__vlan_hwaccel_push_inside(struct sk_buff *skb)
{
skb = vlan_insert_tag_set_proto(skb, skb->vlan_proto,
skb_vlan_tag_get(skb));
if (likely(skb))
skb->vlan_tci = 0;
4166: 0f 8f 7d fc ff ff jg 3de9 <vxlan_build_skb+0x59>
416c: a8 01 test $0x1,%al
416e: 0f 85 75 fc ff ff jne 3de9 <vxlan_build_skb+0x59>
int cloned)
{
int delta = 0;
if (headroom > skb_headroom(skb))
delta = headroom - skb_headroom(skb);
4174: 0f b7 83 9a 00 00 00 movzwl 0x9a(%rbx),%eax
417b: 66 83 f8 06 cmp $0x6,%ax
}
}
static inline int skb_checksum_start_offset(const struct sk_buff *skb)
{
return skb->csum_start - skb_headroom(skb);
417f: 74 0a je 418b <vxlan_build_skb+0x3fb>
4181: 66 83 f8 10 cmp $0x10,%ax
*
* Return the number of bytes of free space at the head of an &sk_buff.
*/
static inline unsigned int skb_headroom(const struct sk_buff *skb)
{
return skb->data - skb->head;
4185: 0f 85 5e fc ff ff jne 3de9 <vxlan_build_skb+0x59>
418b: 41 81 cd 00 40 00 00 or $0x4000,%r13d
if ((vxflags & VXLAN_F_REMCSUM_TX) &&
skb->ip_summed == CHECKSUM_PARTIAL) {
int csum_start = skb_checksum_start_offset(skb);
if (csum_start <= VXLAN_MAX_REMCSUM_START &&
4192: e9 52 fc ff ff jmpq 3de9 <vxlan_build_skb+0x59>
4197: 80 a3 91 00 00 00 f9 andb $0xf9,0x91(%rbx)
419e: 80 a3 92 00 00 00 fd andb $0xfd,0x92(%rbx)
!(csum_start & VXLAN_RCO_SHIFT_MASK) &&
(skb->csum_offset == offsetof(struct udphdr, check) ||
41a5: e9 b4 fd ff ff jmpq 3f5e <vxlan_build_skb+0x1ce>
41aa: c6 40 fb 03 movb $0x3,-0x5(%rax)
if ((vxflags & VXLAN_F_REMCSUM_TX) &&
skb->ip_summed == CHECKSUM_PARTIAL) {
int csum_start = skb_checksum_start_offset(skb);
if (csum_start <= VXLAN_MAX_REMCSUM_START &&
!(csum_start & VXLAN_RCO_SHIFT_MASK) &&
41ae: 0f b7 83 c0 00 00 00 movzwl 0xc0(%rbx),%eax
41b5: 66 89 83 b8 00 00 00 mov %ax,0xb8(%rbx)
(skb->csum_offset == offsetof(struct udphdr, check) ||
skb->csum_offset == offsetof(struct tcphdr, check)))
type |= SKB_GSO_TUNNEL_REMCSUM;
41bc: 80 a3 93 00 00 00 f7 andb $0xf7,0x93(%rbx)
41c3: 48 83 c4 18 add $0x18,%rsp
start = skb_checksum_start_offset(skb) - sizeof(struct vxlanhdr);
vxh->vx_vni |= vxlan_compute_rco(start, skb->csum_offset);
vxh->vx_flags |= VXLAN_HF_RCO;
if (!skb_is_gso(skb)) {
skb->ip_summed = CHECKSUM_NONE;
41c7: 5b pop %rbx
41c8: 31 c0 xor %eax,%eax
41ca: 41 5c pop %r12
41cc: 41 5d pop %r13
skb->encapsulation = 0;
41ce: 41 5e pop %r14
41d0: 41 5f pop %r15
41d2: 5d pop %rbp
41d3: c3 retq
41d4: b8 65 58 00 00 mov $0x5865,%eax
41d9: eb da jmp 41b5 <vxlan_build_skb+0x425>
return 0;
case htons(ETH_P_IPV6):
gpe->next_protocol = VXLAN_GPE_NP_IPV6;
return 0;
case htons(ETH_P_TEB):
gpe->next_protocol = VXLAN_GPE_NP_ETHERNET;
41db: be 01 00 00 00 mov $0x1,%esi
vxlan_build_gbp_hdr(vxh, vxflags, md);
if (vxflags & VXLAN_F_GPE) {
err = vxlan_build_gpe_hdr(vxh, vxflags, skb->protocol);
if (err < 0)
goto out_free;
inner_protocol = skb->protocol;
41e0: 48 89 df mov %rbx,%rdi
41e3: e8 00 00 00 00 callq 41e8 <vxlan_build_skb+0x458>
#define ENCAP_TYPE_IPPROTO 1
static inline void skb_set_inner_protocol(struct sk_buff *skb,
__be16 protocol)
{
skb->inner_protocol = protocol;
41e8: be d5 06 00 00 mov $0x6d5,%esi
skb->inner_protocol_type = ENCAP_TYPE_ETHER;
41ed: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
return 0;
out_free:
kfree_skb(skb);
return err;
}
41f4: e8 00 00 00 00 callq 41f9 <vxlan_build_skb+0x469>
goto out_free;
inner_protocol = skb->protocol;
}
skb_set_inner_protocol(skb, inner_protocol);
return 0;
41f9: 48 83 c4 18 add $0x18,%rsp
out_free:
kfree_skb(skb);
return err;
}
41fd: b8 f4 ff ff ff mov $0xfffffff4,%eax
4202: 5b pop %rbx
4203: 41 5c pop %r12
{
struct vxlanhdr *vxh;
int min_headroom;
int err;
int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
__be16 inner_protocol = htons(ETH_P_TEB);
4205: 41 5d pop %r13
4207: 41 5e pop %r14
4209: 41 5f pop %r15
__dev_kfree_skb_irq(skb, SKB_REASON_CONSUMED);
}
static inline void dev_kfree_skb_any(struct sk_buff *skb)
{
__dev_kfree_skb_any(skb, SKB_REASON_DROPPED);
420b: 5d pop %rbp
420c: c3 retq
420d: c6 40 fb 02 movb $0x2,-0x5(%rax)
4211: eb 9b jmp 41ae <vxlan_build_skb+0x41e>
4213: 48 8b bb d8 00 00 00 mov 0xd8(%rbx),%rdi
err = skb_cow_head(skb, min_headroom);
if (unlikely(err))
goto out_free;
skb = vlan_hwaccel_push_inside(skb);
if (WARN_ON(!skb))
421a: e9 7b fe ff ff jmpq 409a <vxlan_build_skb+0x30a>
421f: 90 nop
0000000000004220 <vxlan_xmit_one>:
4220: e8 00 00 00 00 callq 4225 <vxlan_xmit_one+0x5>
4225: 55 push %rbp
4226: 49 89 d3 mov %rdx,%r11
return 0;
out_free:
kfree_skb(skb);
return err;
}
4229: 4c 8d 96 40 08 00 00 lea 0x840(%rsi),%r10
if (unlikely(err))
goto out_free;
skb = vlan_hwaccel_push_inside(skb);
if (WARN_ON(!skb))
return -ENOMEM;
4230: 48 89 e5 mov %rsp,%rbp
return 0;
out_free:
kfree_skb(skb);
return err;
}
4233: 41 57 push %r15
4235: 41 56 push %r14
4237: 41 55 push %r13
4239: 41 54 push %r12
423b: 49 89 fd mov %rdi,%r13
switch (protocol) {
case htons(ETH_P_IP):
gpe->next_protocol = VXLAN_GPE_NP_IPV4;
return 0;
case htons(ETH_P_IPV6):
gpe->next_protocol = VXLAN_GPE_NP_IPV6;
423e: 53 push %rbx
423f: 48 89 f3 mov %rsi,%rbx
4242: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
4246: 48 81 ec f0 00 00 00 sub $0xf0,%rsp
424d: 4c 8b 67 58 mov 0x58(%rdi),%r12
}
}
static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
struct vxlan_rdst *rdst, bool did_rsc)
{
4251: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
4258: 00 00
*
* Get network device private data
*/
static inline void *netdev_priv(const struct net_device *dev)
{
return (char *)dev + ALIGN(sizeof(struct net_device), NETDEV_ALIGN);
425a: 48 89 84 24 e8 00 00 mov %rax,0xe8(%rsp)
4261: 00
4262: 31 c0 xor %eax,%eax
4264: 8b 86 d8 08 00 00 mov 0x8d8(%rsi),%eax
426a: 49 83 e4 fe and $0xfffffffffffffffe,%r12
426e: 89 44 24 64 mov %eax,0x64(%rsp)
4272: 48 8b 86 70 08 00 00 mov 0x870(%rsi),%rax
4279: 48 8b 80 80 04 00 00 mov 0x480(%rax),%rax
static inline struct metadata_dst *skb_metadata_dst(struct sk_buff *skb)
{
struct metadata_dst *md_dst = (struct metadata_dst *) skb_dst(skb);
if (md_dst && md_dst->dst.flags & DST_METADATA)
4280: 48 89 44 24 70 mov %rax,0x70(%rsp)
4285: 48 8b 86 78 08 00 00 mov 0x878(%rsi),%rax
428c: 48 89 44 24 78 mov %rax,0x78(%rsp)
4291: 0f 84 99 08 00 00 je 4b30 <vxlan_xmit_one+0x910>
__be16 src_port = 0, dst_port;
__be32 vni, label;
__be16 df = 0;
__u8 tos, ttl;
int err;
u32 flags = vxlan->flags;
4297: 41 f6 44 24 61 02 testb $0x2,0x61(%r12)
429d: 0f 84 9b 03 00 00 je 463e <vxlan_xmit_one+0x41e>
42a3: 49 81 c4 a0 00 00 00 add $0xa0,%r12
42aa: 4d 85 db test %r11,%r11
42ad: 0f 84 a9 03 00 00 je 465c <vxlan_xmit_one+0x43c>
42b3: 41 0f b7 43 1c movzwl 0x1c(%r11),%eax
bool udp_sum = false;
bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
42b8: 66 85 c0 test %ax,%ax
42bb: 66 89 84 24 84 00 00 mov %ax,0x84(%rsp)
42c2: 00
42c3: 0f 84 51 03 00 00 je 461a <vxlan_xmit_one+0x3fa>
42c9: 41 8b 43 20 mov 0x20(%r11),%eax
42cd: 4d 8d 73 48 lea 0x48(%r11),%r14
42d1: 4d 89 df mov %r11,%r15
{
struct metadata_dst *md_dst = skb_metadata_dst(skb);
struct dst_entry *dst;
if (md_dst)
return &md_dst->u.tun_info;
42d4: 89 44 24 50 mov %eax,0x50(%rsp)
42d8: 48 8d 83 54 09 00 00 lea 0x954(%rbx),%rax
info = skb_tunnel_info(skb);
if (rdst) {
42df: 48 89 44 24 68 mov %rax,0x68(%rsp)
dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
42e4: 41 0f b7 03 movzwl (%r11),%eax
42e8: 66 83 f8 0a cmp $0xa,%ax
42ec: 0f 84 3c 03 00 00 je 462e <vxlan_xmit_one+0x40e>
42f2: 41 8b 77 04 mov 0x4(%r15),%esi
42f6: 85 f6 test %esi,%esi
42f8: 0f 94 c2 sete %dl
vni = rdst->remote_vni;
42fb: 84 d2 test %dl,%dl
dst = &rdst->remote_ip;
src = &vxlan->cfg.saddr;
dst_cache = &rdst->dst_cache;
42fd: 74 3e je 433d <vxlan_xmit_one+0x11d>
42ff: 84 c9 test %cl,%cl
info = skb_tunnel_info(skb);
if (rdst) {
dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
vni = rdst->remote_vni;
dst = &rdst->remote_ip;
4301: 0f 85 d1 04 00 00 jne 47d8 <vxlan_xmit_one+0x5b8>
info = skb_tunnel_info(skb);
if (rdst) {
dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
vni = rdst->remote_vni;
4307: 48 83 83 68 01 00 00 addq $0x1,0x168(%rbx)
430e: 01
dst = &rdst->remote_ip;
src = &vxlan->cfg.saddr;
430f: 4c 89 ef mov %r13,%rdi
4312: e8 00 00 00 00 callq 4317 <vxlan_xmit_one+0xf7>
4317: 48 8b 84 24 e8 00 00 mov 0xe8(%rsp),%rax
431e: 00
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
431f: 65 48 33 04 25 28 00 xor %gs:0x28,%rax
4326: 00 00
return ipv6_addr_any(&ipa->sin6.sin6_addr);
else
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
4328: 0f 85 da 09 00 00 jne 4d08 <vxlan_xmit_one+0xae8>
dst = &remote_ip;
src = &local_ip;
dst_cache = &info->dst_cache;
}
if (vxlan_addr_any(dst)) {
432e: 48 8d 65 d8 lea -0x28(%rbp),%rsp
if (did_rsc) {
4332: 5b pop %rbx
4333: 41 5c pop %r12
4335: 41 5d pop %r13
}
return;
drop:
dev->stats.tx_dropped++;
4337: 41 5e pop %r14
4339: 41 5f pop %r15
433b: 5d pop %rbp
433c: c3 retq
433d: 0f b6 bb 83 09 00 00 movzbl 0x983(%rbx),%edi
rt_tx_error:
ip_rt_put(rt);
tx_error:
dev->stats.tx_errors++;
tx_free:
dev_kfree_skb(skb);
4344: 41 0f b7 95 c4 00 00 movzwl 0xc4(%r13),%edx
434b: 00
}
434c: 49 03 95 d0 00 00 00 add 0xd0(%r13),%rdx
4353: 40 84 ff test %dil,%dil
4356: 40 88 bc 24 87 00 00 mov %dil,0x87(%rsp)
435d: 00
435e: 48 89 54 24 38 mov %rdx,0x38(%rsp)
4363: 75 22 jne 4387 <vxlan_xmit_one+0x167>
4365: 66 83 f8 0a cmp $0xa,%ax
4369: 0f 84 71 03 00 00 je 46e0 <vxlan_xmit_one+0x4c0>
goto drop;
}
old_iph = ip_hdr(skb);
ttl = vxlan->cfg.ttl;
436f: 41 8b 47 04 mov 0x4(%r15),%eax
4373: 25 f0 00 00 00 and $0xf0,%eax
skb->transport_header += offset;
}
static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
return skb->head + skb->network_header;
4378: 3d e0 00 00 00 cmp $0xe0,%eax
437d: 0f 94 c0 sete %al
4380: 88 84 24 87 00 00 00 mov %al,0x87(%rsp)
4387: 0f b6 83 82 09 00 00 movzbl 0x982(%rbx),%eax
438e: 3c 01 cmp $0x1,%al
4390: 88 84 24 88 00 00 00 mov %al,0x88(%rsp)
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
}
static inline bool vxlan_addr_multicast(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
4397: 0f 84 a1 04 00 00 je 483e <vxlan_xmit_one+0x61e>
439d: 8b 83 84 09 00 00 mov 0x984(%rbx),%eax
return ipv6_addr_is_multicast(&ipa->sin6.sin6_addr);
else
return IN_MULTICAST(ntohl(ipa->sin.sin_addr.s_addr));
43a3: 0f b7 93 80 09 00 00 movzwl 0x980(%rbx),%edx
43aa: 48 8b bb 80 04 00 00 mov 0x480(%rbx),%rdi
43b1: 89 84 24 80 00 00 00 mov %eax,0x80(%rsp)
ttl = vxlan->cfg.ttl;
if (!ttl && vxlan_addr_multicast(dst))
ttl = 1;
tos = vxlan->cfg.tos;
43b8: 0f b7 83 7e 09 00 00 movzwl 0x97e(%rbx),%eax
if (tos == 1)
43bf: 89 94 24 9c 00 00 00 mov %edx,0x9c(%rsp)
ttl = vxlan->cfg.ttl;
if (!ttl && vxlan_addr_multicast(dst))
ttl = 1;
tos = vxlan->cfg.tos;
43c6: 39 c2 cmp %eax,%edx
if (tos == 1)
43c8: 89 84 24 98 00 00 00 mov %eax,0x98(%rsp)
tos = ip_tunnel_get_dsfield(old_iph, skb);
label = vxlan->cfg.label;
43cf: 0f 8e 3b 04 00 00 jle 4810 <vxlan_xmit_one+0x5f0>
src_port = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
43d5: 41 f6 85 91 00 00 00 testb $0x30,0x91(%r13)
43dc: 30
43dd: 0f 84 0c 04 00 00 je 47ef <vxlan_xmit_one+0x5cf>
tos = vxlan->cfg.tos;
if (tos == 1)
tos = ip_tunnel_get_dsfield(old_iph, skb);
label = vxlan->cfg.label;
43e3: 41 8b 85 a4 00 00 00 mov 0xa4(%r13),%eax
src_port = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
43ea: 85 c0 test %eax,%eax
43ec: 0f 85 c0 00 00 00 jne 44b2 <vxlan_xmit_one+0x292>
43f2: 49 8b b5 d8 00 00 00 mov 0xd8(%r13),%rsi
static inline __be16 udp_flow_src_port(struct net *net, struct sk_buff *skb,
int min, int max, bool use_eth)
{
u32 hash;
if (min >= max) {
43f9: 41 0f b7 8d c0 00 00 movzwl 0xc0(%r13),%ecx
4400: 00
4401: 0f b6 56 08 movzbl 0x8(%rsi),%edx
data, proto, nhoff, hlen, flags);
}
static inline __u32 skb_get_hash(struct sk_buff *skb)
{
if (!skb->l4_hash && !skb->sw_hash)
4405: 0f b6 46 0b movzbl 0xb(%rsi),%eax
4409: 81 e9 05 41 52 21 sub $0x21524105,%ecx
440f: 44 0f b6 46 04 movzbl 0x4(%rsi),%r8d
__skb_get_hash(skb);
return skb->hash;
4414: 8d 3c 0a lea (%rdx,%rcx,1),%edi
4417: c1 e0 18 shl $0x18,%eax
/* Use default range */
inet_get_local_port_range(net, &min, &max);
}
hash = skb_get_hash(skb);
if (unlikely(!hash)) {
441a: 45 8d 0c 08 lea (%r8,%rcx,1),%r9d
441e: 8d 14 38 lea (%rax,%rdi,1),%edx
4421: 0f b6 46 0a movzbl 0xa(%rsi),%eax
if (use_eth) {
/* Can't find a normal hash, caller has indicated an
* Ethernet packet so use that to compute a hash.
*/
hash = jhash(skb->data, 2 * ETH_ALEN,
4425: 0f b6 7e 07 movzbl 0x7(%rsi),%edi
{
u32 a, b, c;
const u8 *k = key;
/* Set up the internal state */
a = b = c = JHASH_INITVAL + length + initval;
4429: c1 e0 10 shl $0x10,%eax
442c: c1 e7 18 shl $0x18,%edi
442f: 01 d0 add %edx,%eax
4431: 0f b6 56 06 movzbl 0x6(%rsi),%edx
4435: 46 8d 04 0f lea (%rdi,%r9,1),%r8d
4439: c1 e2 10 shl $0x10,%edx
443c: 41 8d 3c 10 lea (%r8,%rdx,1),%edi
4440: 0f b6 56 05 movzbl 0x5(%rsi),%edx
4444: 44 0f b6 06 movzbl (%rsi),%r8d
4448: c1 e2 08 shl $0x8,%edx
444b: 44 01 c1 add %r8d,%ecx
444e: 01 fa add %edi,%edx
4450: 0f b6 7e 03 movzbl 0x3(%rsi),%edi
4454: c1 e7 18 shl $0x18,%edi
4457: 44 8d 04 39 lea (%rcx,%rdi,1),%r8d
445b: 0f b6 7e 02 movzbl 0x2(%rsi),%edi
445f: 0f b6 4e 01 movzbl 0x1(%rsi),%ecx
4463: 0f b6 76 09 movzbl 0x9(%rsi),%esi
4467: c1 e7 10 shl $0x10,%edi
446a: c1 e1 08 shl $0x8,%ecx
446d: c1 e6 08 shl $0x8,%esi
case 10: c += (u32)k[9]<<8;
case 9: c += k[8];
case 8: b += (u32)k[7]<<24;
case 7: b += (u32)k[6]<<16;
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
4470: 44 01 c7 add %r8d,%edi
4473: 01 f0 add %esi,%eax
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4475: 89 d6 mov %edx,%esi
4477: 01 f9 add %edi,%ecx
case 10: c += (u32)k[9]<<8;
case 9: c += k[8];
case 8: b += (u32)k[7]<<24;
case 7: b += (u32)k[6]<<16;
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
4479: 31 d0 xor %edx,%eax
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
447b: c1 c6 0e rol $0xe,%esi
case 10: c += (u32)k[9]<<8;
case 9: c += k[8];
case 8: b += (u32)k[7]<<24;
case 7: b += (u32)k[6]<<16;
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
447e: 29 f0 sub %esi,%eax
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4480: 89 c6 mov %eax,%esi
4482: 31 c1 xor %eax,%ecx
4484: c1 c6 0b rol $0xb,%esi
4487: 29 f1 sub %esi,%ecx
4489: 89 ce mov %ecx,%esi
448b: 31 ca xor %ecx,%edx
448d: c1 ce 07 ror $0x7,%esi
4490: 29 f2 sub %esi,%edx
4492: 89 d6 mov %edx,%esi
__jhash_final(a, b, c);
4494: 31 d0 xor %edx,%eax
4496: c1 c6 10 rol $0x10,%esi
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4499: 29 f0 sub %esi,%eax
449b: 89 c6 mov %eax,%esi
__jhash_final(a, b, c);
449d: 31 c1 xor %eax,%ecx
449f: c1 c6 04 rol $0x4,%esi
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
44a2: 29 f1 sub %esi,%ecx
__jhash_final(a, b, c);
44a4: 31 ca xor %ecx,%edx
44a6: c1 c1 0e rol $0xe,%ecx
44a9: 29 ca sub %ecx,%edx
44ab: 31 d0 xor %edx,%eax
44ad: c1 ca 08 ror $0x8,%edx
44b0: 29 d0 sub %edx,%eax
44b2: 8b b4 24 98 00 00 00 mov 0x98(%rsp),%esi
44b9: 8b 8c 24 9c 00 00 00 mov 0x9c(%rsp),%ecx
44c0: 89 c2 mov %eax,%edx
44c2: c1 e2 10 shl $0x10,%edx
44c5: 31 d0 xor %edx,%eax
44c7: 29 f1 sub %esi,%ecx
44c9: 48 63 c9 movslq %ecx,%rcx
44cc: 48 0f af c8 imul %rax,%rcx
44d0: 48 c1 e9 20 shr $0x20,%rcx
44d4: 01 ce add %ecx,%esi
44d6: 66 c1 c6 08 rol $0x8,%si
44da: 4d 85 e4 test %r12,%r12
44dd: 66 89 74 24 62 mov %si,0x62(%rsp)
* attacker is leaked. Only upper 16 bits are relevant in the
* computation for 16 bit port value.
*/
hash ^= hash << 16;
return htons((((u64) hash * (max - min)) >> 32) + min);
44e2: 0f 84 7a 05 00 00 je 4a62 <vxlan_xmit_one+0x842>
44e8: 41 0f b6 44 24 2b movzbl 0x2b(%r12),%eax
44ee: 48 8d 94 24 94 00 00 lea 0x94(%rsp),%rdx
44f5: 00
44f6: 88 84 24 87 00 00 00 mov %al,0x87(%rsp)
44fd: 41 0f b6 44 24 2a movzbl 0x2a(%r12),%eax
4503: 88 84 24 88 00 00 00 mov %al,0x88(%rsp)
vxlan->cfg.port_max, true);
if (info) {
450a: 41 8b 44 24 2c mov 0x2c(%r12),%eax
450f: 89 84 24 80 00 00 00 mov %eax,0x80(%rsp)
4516: 41 0f b6 44 24 29 movzbl 0x29(%r12),%eax
ttl = info->key.ttl;
451c: 83 e0 01 and $0x1,%eax
}
}
static inline void *ip_tunnel_info_opts(struct ip_tunnel_info *info)
{
return info + 1;
451f: 41 80 7c 24 48 00 cmpb $0x0,0x48(%r12)
4525: 88 44 24 48 mov %al,0x48(%rsp)
4529: 49 8d 44 24 50 lea 0x50(%r12),%rax
tos = info->key.tos;
452e: 48 0f 44 c2 cmove %rdx,%rax
4532: 48 89 44 24 40 mov %rax,0x40(%rsp)
4537: 48 8b 7c 24 70 mov 0x70(%rsp),%rdi
label = info->key.label;
453c: 48 39 7c 24 78 cmp %rdi,0x78(%rsp)
4541: 0f 95 44 24 70 setne 0x70(%rsp)
udp_sum = !!(info->key.tun_flags & TUNNEL_CSUM);
4546: 66 41 83 3f 02 cmpw $0x2,(%r15)
454b: 0f 84 2e 03 00 00 je 487f <vxlan_xmit_one+0x65f>
4551: 48 8b 83 68 08 00 00 mov 0x868(%rbx),%rax
4558: 48 85 c0 test %rax,%rax
455b: 0f 84 a6 fd ff ff je 4307 <vxlan_xmit_one+0xe7>
4561: 48 8b 40 10 mov 0x10(%rax),%rax
4565: 0f b6 8c 24 88 00 00 movzbl 0x88(%rsp),%ecx
456c: 00
__be16 df = 0;
__u8 tos, ttl;
int err;
u32 flags = vxlan->flags;
bool udp_sum = false;
bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
456d: 48 8b 40 20 mov 0x20(%rax),%rax
4571: 48 89 44 24 30 mov %rax,0x30(%rsp)
md = ip_tunnel_info_opts(info);
} else {
md->gbp = skb->mark;
}
if (dst->sa.sa_family == AF_INET) {
4576: 48 8b 44 24 68 mov 0x68(%rsp),%rax
457b: 48 83 c0 08 add $0x8,%rax
457f: 4d 85 db test %r11,%r11
#if IS_ENABLED(CONFIG_IPV6)
} else {
struct dst_entry *ndst;
u32 rt6i_flags;
if (!vxlan->vn6_sock)
4582: 48 89 44 24 78 mov %rax,0x78(%rsp)
4587: 49 8d 47 08 lea 0x8(%r15),%rax
458b: 48 89 44 24 68 mov %rax,0x68(%rsp)
4590: 0f 84 93 05 00 00 je 4b29 <vxlan_xmit_one+0x909>
goto drop;
sk = vxlan->vn6_sock->sock->sk;
ndst = vxlan6_get_route(vxlan, skb,
4596: 41 8b 53 24 mov 0x24(%r11),%edx
459a: 48 8b 44 24 78 mov 0x78(%rsp),%rax
struct dst_entry *ndst;
u32 rt6i_flags;
if (!vxlan->vn6_sock)
goto drop;
sk = vxlan->vn6_sock->sock->sk;
459f: 44 8b 84 24 80 00 00 mov 0x80(%rsp),%r8d
45a6: 00
ndst = vxlan6_get_route(vxlan, skb,
45a7: 4d 8d 4f 08 lea 0x8(%r15),%r9
45ab: 4c 89 74 24 08 mov %r14,0x8(%rsp)
45b0: 4c 89 64 24 10 mov %r12,0x10(%rsp)
45b5: 4c 89 ee mov %r13,%rsi
rdst ? rdst->remote_ifindex : 0, tos,
label, &dst->sin6.sin6_addr,
45b8: 4c 89 d7 mov %r10,%rdi
45bb: 48 89 04 24 mov %rax,(%rsp)
45bf: e8 ac c4 ff ff callq a70 <vxlan6_get_route>
if (!vxlan->vn6_sock)
goto drop;
sk = vxlan->vn6_sock->sock->sk;
ndst = vxlan6_get_route(vxlan, skb,
45c4: 48 3d 00 f0 ff ff cmp $0xfffffffffffff000,%rax
45ca: 49 89 c6 mov %rax,%r14
45cd: 0f 87 2c 06 00 00 ja 4bff <vxlan_xmit_one+0x9df>
45d3: 48 3b 58 18 cmp 0x18(%rax),%rbx
45d7: 0f 84 90 05 00 00 je 4b6d <vxlan_xmit_one+0x94d>
45dd: 4d 85 e4 test %r12,%r12
45e0: 8b 80 14 01 00 00 mov 0x114(%rax),%eax
45e6: 0f 85 06 01 00 00 jne 46f2 <vxlan_xmit_one+0x4d2>
45ec: 89 c2 mov %eax,%edx
45ee: c1 ea 1f shr $0x1f,%edx
45f1: 84 d2 test %dl,%dl
45f3: 0f 84 f9 00 00 00 je 46f2 <vxlan_xmit_one+0x4d2>
rdst ? rdst->remote_ifindex : 0, tos,
label, &dst->sin6.sin6_addr,
&src->sin6.sin6_addr,
dst_cache, info);
if (IS_ERR(ndst)) {
45f9: a9 00 00 00 30 test $0x30000000,%eax
45fe: 0f 84 b0 05 00 00 je 4bb4 <vxlan_xmit_one+0x994>
&dst->sin6.sin6_addr);
dev->stats.tx_carrier_errors++;
goto tx_error;
}
if (ndst->dev == dev) {
4604: 8b 44 24 64 mov 0x64(%rsp),%eax
4608: c1 e8 07 shr $0x7,%eax
460b: 83 f0 01 xor $0x1,%eax
goto tx_error;
}
/* Bypass encapsulation if the destination is local */
rt6i_flags = ((struct rt6_info *)ndst)->rt6i_flags;
if (!info && rt6i_flags & RTF_LOCAL &&
460e: 83 e0 01 and $0x1,%eax
dev->stats.collisions++;
goto tx_error;
}
/* Bypass encapsulation if the destination is local */
rt6i_flags = ((struct rt6_info *)ndst)->rt6i_flags;
4611: 88 44 24 48 mov %al,0x48(%rsp)
4615: e9 e1 00 00 00 jmpq 46fb <vxlan_xmit_one+0x4db>
if (!info && rt6i_flags & RTF_LOCAL &&
461a: 0f b7 83 7c 09 00 00 movzwl 0x97c(%rbx),%eax
4621: 66 89 84 24 84 00 00 mov %ax,0x84(%rsp)
4628: 00
4629: e9 9b fc ff ff jmpq 42c9 <vxlan_xmit_one+0xa9>
462e: 49 8b 57 08 mov 0x8(%r15),%rdx
4632: 49 0b 57 10 or 0x10(%r15),%rdx
vxlan_encap_bypass(skb, vxlan, dst_vxlan);
return;
}
if (!info)
udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX);
4636: 0f 94 c2 sete %dl
4639: e9 bd fc ff ff jmpq 42fb <vxlan_xmit_one+0xdb>
463e: 4d 8b a4 24 90 00 00 mov 0x90(%r12),%r12
4645: 00
4646: 4d 85 e4 test %r12,%r12
4649: 0f 84 e1 04 00 00 je 4b30 <vxlan_xmit_one+0x910>
bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
info = skb_tunnel_info(skb);
if (rdst) {
dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
464f: 49 83 c4 1c add $0x1c,%r12
4653: 4d 85 db test %r11,%r11
4656: 0f 85 57 fc ff ff jne 42b3 <vxlan_xmit_one+0x93>
465c: 41 0f b7 44 24 32 movzwl 0x32(%r12),%eax
static inline bool ipv6_addr_any(const struct in6_addr *a)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
const unsigned long *ul = (const unsigned long *)a;
return (ul[0] | ul[1]) == 0UL;
4662: 66 85 c0 test %ax,%ax
4665: 66 89 84 24 84 00 00 mov %ax,0x84(%rsp)
466c: 00
466d: 75 0f jne 467e <vxlan_xmit_one+0x45e>
dst = skb_dst(skb);
if (dst && dst->lwtstate)
466f: 0f b7 83 7c 09 00 00 movzwl 0x97c(%rbx),%eax
4676: 66 89 84 24 84 00 00 mov %ax,0x84(%rsp)
467d: 00
467e: 49 8b 04 24 mov (%r12),%rax
info->options_len = len;
}
static inline struct ip_tunnel_info *lwt_tun_info(struct lwtunnel_state *lwtstate)
{
return (struct ip_tunnel_info *)lwtstate->data;
4682: 48 c1 e8 20 shr $0x20,%rax
bool udp_sum = false;
bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
info = skb_tunnel_info(skb);
if (rdst) {
4686: 41 f6 44 24 49 02 testb $0x2,0x49(%r12)
if (!info) {
WARN_ONCE(1, "%s: Missing encapsulation instructions\n",
dev->name);
goto drop;
}
dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
468c: 48 89 44 24 50 mov %rax,0x50(%rsp)
4691: 0f 85 80 03 00 00 jne 4a17 <vxlan_xmit_one+0x7f7>
4697: 41 8b 44 24 0c mov 0xc(%r12),%eax
469c: ba 02 00 00 00 mov $0x2,%edx
46a1: 66 89 94 24 a0 00 00 mov %dx,0xa0(%rsp)
46a8: 00
46a9: 89 84 24 a4 00 00 00 mov %eax,0xa4(%rsp)
static inline __be32 vxlan_tun_id_to_vni(__be64 tun_id)
{
#if defined(__BIG_ENDIAN)
return (__force __be32)tun_id;
#else
return (__force __be32)((__force u64)tun_id >> 32);
46b0: 41 8b 44 24 08 mov 0x8(%r12),%eax
46b5: 89 84 24 c4 00 00 00 mov %eax,0xc4(%rsp)
46bc: b8 02 00 00 00 mov $0x2,%eax
}
static inline unsigned short ip_tunnel_info_af(const struct ip_tunnel_info
*tun_info)
{
return tun_info->mode & IP_TUNNEL_INFO_IPV6 ? AF_INET6 : AF_INET;
46c1: 48 8d bc 24 c0 00 00 lea 0xc0(%rsp),%rdi
46c8: 00
vni = vxlan_tun_id_to_vni(info->key.tun_id);
remote_ip.sa.sa_family = ip_tunnel_info_af(info);
if (remote_ip.sa.sa_family == AF_INET) {
remote_ip.sin.sin_addr.s_addr = info->key.u.ipv4.dst;
46c9: 4d 8d 74 24 38 lea 0x38(%r12),%r14
dev->name);
goto drop;
}
dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
vni = vxlan_tun_id_to_vni(info->key.tun_id);
remote_ip.sa.sa_family = ip_tunnel_info_af(info);
46ce: 4c 8d bc 24 a0 00 00 lea 0xa0(%rsp),%r15
46d5: 00
46d6: 48 89 7c 24 68 mov %rdi,0x68(%rsp)
if (remote_ip.sa.sa_family == AF_INET) {
remote_ip.sin.sin_addr.s_addr = info->key.u.ipv4.dst;
46db: e9 08 fc ff ff jmpq 42e8 <vxlan_xmit_one+0xc8>
local_ip.sin.sin_addr.s_addr = info->key.u.ipv4.src;
46e0: 41 0f b6 47 08 movzbl 0x8(%r15),%eax
46e5: 3d ff 00 00 00 cmp $0xff,%eax
46ea: 0f 94 c0 sete %al
46ed: e9 8e fc ff ff jmpq 4380 <vxlan_xmit_one+0x160>
} else {
remote_ip.sin6.sin6_addr = info->key.u.ipv6.dst;
local_ip.sin6.sin6_addr = info->key.u.ipv6.src;
}
dst = &remote_ip;
src = &local_ip;
46f2: 4d 85 e4 test %r12,%r12
46f5: 0f 84 09 ff ff ff je 4604 <vxlan_xmit_one+0x3e4>
dst_cache = &info->dst_cache;
46fb: 41 0f b7 85 c0 00 00 movzwl 0xc0(%r13),%eax
4702: 00
local_ip.sin.sin_addr.s_addr = info->key.u.ipv4.src;
} else {
remote_ip.sin6.sin6_addr = info->key.u.ipv6.dst;
local_ip.sin6.sin6_addr = info->key.u.ipv6.src;
}
dst = &remote_ip;
4703: 66 83 f8 08 cmp $0x8,%ax
src = &local_ip;
4707: 0f 84 8e 03 00 00 je 4a9b <vxlan_xmit_one+0x87b>
470d: 44 0f b6 bc 24 88 00 movzbl 0x88(%rsp),%r15d
4714: 00 00
return (a->s6_addr32[0] & htonl(0xfffffff0)) == htonl(0x20010010);
}
static inline bool ipv6_addr_is_multicast(const struct in6_addr *addr)
{
return (addr->s6_addr32[0] & htonl(0xFF000000)) == htonl(0xFF000000);
4716: 45 31 e4 xor %r12d,%r12d
4719: 41 83 e7 fc and $0xfffffffc,%r15d
471d: 66 3d 86 dd cmp $0xdd86,%ax
4721: 0f 84 60 04 00 00 je 4b87 <vxlan_xmit_one+0x967>
goto tx_error;
vxlan_encap_bypass(skb, vxlan, dst_vxlan);
return;
}
if (!info)
4727: 80 bc 24 87 00 00 00 cmpb $0x0,0x87(%rsp)
472e: 00
472f: 0f 84 52 03 00 00 je 4a87 <vxlan_xmit_one+0x867>
/* Extract dsfield from inner protocol */
static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
const struct sk_buff *skb)
{
if (skb->protocol == htons(ETH_P_IP))
4735: 0f b6 74 24 70 movzbl 0x70(%rsp),%esi
473a: 4c 89 ef mov %r13,%rdi
* ECN codepoint of the outside header to ECT(0) if the ECN codepoint of
* the inside header is CE.
*/
static inline __u8 INET_ECN_encapsulate(__u8 outer, __u8 inner)
{
outer &= ~INET_ECN_MASK;
473d: e8 00 00 00 00 callq 4742 <vxlan_xmit_one+0x522>
4742: 0f b6 44 24 48 movzbl 0x48(%rsp),%eax
4747: 44 8b 4c 24 64 mov 0x64(%rsp),%r9d
474c: ba 28 00 00 00 mov $0x28,%edx
return iph->tos;
else if (skb->protocol == htons(ETH_P_IPV6))
4751: 4c 8b 44 24 40 mov 0x40(%rsp),%r8
4756: 8b 4c 24 50 mov 0x50(%rsp),%ecx
udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX);
tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
ttl = ttl ? : ip6_dst_hoplimit(ndst);
475a: 4c 89 f6 mov %r14,%rsi
475d: 4c 89 ef mov %r13,%rdi
4760: 89 04 24 mov %eax,(%rsp)
4763: e8 28 f6 ff ff callq 3d90 <vxlan_build_skb>
skb_scrub_packet(skb, xnet);
4768: 85 c0 test %eax,%eax
476a: 0f 88 58 03 00 00 js 4ac8 <vxlan_xmit_one+0x8a8>
4770: 0f b6 44 24 48 movzbl 0x48(%rsp),%eax
err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),
4775: 0f b7 74 24 62 movzwl 0x62(%rsp),%esi
477a: 45 09 fc or %r15d,%r12d
477d: 0f b7 8c 24 84 00 00 movzwl 0x84(%rsp),%ecx
4784: 00
4785: 4c 8b 4c 24 68 mov 0x68(%rsp),%r9
478a: 45 0f b6 e4 movzbl %r12b,%r12d
478e: 4c 8b 44 24 78 mov 0x78(%rsp),%r8
4793: 44 89 24 24 mov %r12d,(%rsp)
4797: 4c 89 ea mov %r13,%rdx
vni, md, flags, udp_sum);
if (err < 0) {
479a: 4c 89 f7 mov %r14,%rdi
479d: 83 f0 01 xor $0x1,%eax
dst_release(ndst);
return;
}
udp_tunnel6_xmit_skb(ndst, sk, skb, dev,
47a0: 89 74 24 18 mov %esi,0x18(%rsp)
47a4: 48 8b 74 24 30 mov 0x30(%rsp),%rsi
47a9: 0f b6 c0 movzbl %al,%eax
47ac: 89 4c 24 20 mov %ecx,0x20(%rsp)
47b0: 48 89 d9 mov %rbx,%rcx
47b3: 89 44 24 28 mov %eax,0x28(%rsp)
47b7: 8b 84 24 80 00 00 00 mov 0x80(%rsp),%eax
47be: 89 44 24 10 mov %eax,0x10(%rsp)
47c2: 0f b6 84 24 87 00 00 movzbl 0x87(%rsp),%eax
47c9: 00
47ca: 89 44 24 08 mov %eax,0x8(%rsp)
47ce: e8 00 00 00 00 callq 47d3 <vxlan_xmit_one+0x5b3>
47d3: e9 3f fb ff ff jmpq 4317 <vxlan_xmit_one+0xf7>
47d8: 48 8b b3 70 08 00 00 mov 0x870(%rbx),%rsi
47df: 4c 89 d2 mov %r10,%rdx
47e2: 4c 89 ef mov %r13,%rdi
47e5: e8 56 f2 ff ff callq 3a40 <vxlan_encap_bypass.isra.47>
47ea: e9 28 fb ff ff jmpq 4317 <vxlan_xmit_one+0xf7>
47ef: 4c 89 ef mov %r13,%rdi
47f2: 4c 89 5c 24 40 mov %r11,0x40(%rsp)
47f7: 4c 89 54 24 48 mov %r10,0x48(%rsp)
47fc: e8 00 00 00 00 callq 4801 <vxlan_xmit_one+0x5e1>
4801: 4c 8b 5c 24 40 mov 0x40(%rsp),%r11
4806: 4c 8b 54 24 48 mov 0x48(%rsp),%r10
}
if (vxlan_addr_any(dst)) {
if (did_rsc) {
/* short-circuited back to local bridge */
vxlan_encap_bypass(skb, vxlan, vxlan);
480b: e9 d3 fb ff ff jmpq 43e3 <vxlan_xmit_one+0x1c3>
4810: 48 8d 94 24 9c 00 00 lea 0x9c(%rsp),%rdx
4817: 00
4818: 48 8d b4 24 98 00 00 lea 0x98(%rsp),%rsi
481f: 00
}
static inline __u32 skb_get_hash(struct sk_buff *skb)
{
if (!skb->l4_hash && !skb->sw_hash)
__skb_get_hash(skb);
4820: 4c 89 5c 24 40 mov %r11,0x40(%rsp)
4825: 4c 89 54 24 48 mov %r10,0x48(%rsp)
482a: e8 00 00 00 00 callq 482f <vxlan_xmit_one+0x60f>
482f: 4c 8b 5c 24 40 mov 0x40(%rsp),%r11
4834: 4c 8b 54 24 48 mov 0x48(%rsp),%r10
4839: e9 97 fb ff ff jmpq 43d5 <vxlan_xmit_one+0x1b5>
483e: 41 0f b7 85 c0 00 00 movzwl 0xc0(%r13),%eax
4845: 00
{
u32 hash;
if (min >= max) {
/* Use default range */
inet_get_local_port_range(net, &min, &max);
4846: 66 83 f8 08 cmp $0x8,%ax
484a: 0f 84 4f 03 00 00 je 4b9f <vxlan_xmit_one+0x97f>
4850: 66 3d 86 dd cmp $0xdd86,%ax
4854: c6 84 24 88 00 00 00 movb $0x0,0x88(%rsp)
485b: 00
485c: 0f 85 3b fb ff ff jne 439d <vxlan_xmit_one+0x17d>
4862: 48 8b 44 24 38 mov 0x38(%rsp),%rax
4867: 0f b7 00 movzwl (%rax),%eax
486a: 66 c1 c0 08 rol $0x8,%ax
486e: 66 c1 e8 04 shr $0x4,%ax
4872: 66 89 84 24 88 00 00 mov %ax,0x88(%rsp)
4879: 00
/* Extract dsfield from inner protocol */
static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
const struct sk_buff *skb)
{
if (skb->protocol == htons(ETH_P_IP))
487a: e9 1e fb ff ff jmpq 439d <vxlan_xmit_one+0x17d>
487f: 48 8b 83 60 08 00 00 mov 0x860(%rbx),%rax
return iph->tos;
else if (skb->protocol == htons(ETH_P_IPV6))
return ipv6_get_dsfield((const struct ipv6hdr *)iph);
else
return 0;
4886: 48 85 c0 test %rax,%rax
4889: 0f 84 78 fa ff ff je 4307 <vxlan_xmit_one+0xe7>
static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
const struct sk_buff *skb)
{
if (skb->protocol == htons(ETH_P_IP))
return iph->tos;
else if (skb->protocol == htons(ETH_P_IPV6))
488f: 48 8b 40 10 mov 0x10(%rax),%rax
}
static inline __u8 ipv6_get_dsfield(const struct ipv6hdr *ipv6h)
{
return ntohs(*(const __be16 *)ipv6h) >> 4;
4893: 4d 85 db test %r11,%r11
4896: 45 8b 47 04 mov 0x4(%r15),%r8d
489a: 0f b6 8c 24 88 00 00 movzbl 0x88(%rsp),%ecx
48a1: 00
48a2: 48 8b 40 20 mov 0x20(%rax),%rax
48a6: 48 89 44 24 78 mov %rax,0x78(%rsp)
48ab: 48 8b 44 24 68 mov 0x68(%rsp),%rax
} else {
md->gbp = skb->mark;
}
if (dst->sa.sa_family == AF_INET) {
if (!vxlan->vn4_sock)
48b0: 4c 8d 48 04 lea 0x4(%rax),%r9
48b4: 0f 84 ab 03 00 00 je 4c65 <vxlan_xmit_one+0xa45>
48ba: 41 8b 53 24 mov 0x24(%r11),%edx
48be: 4c 89 d7 mov %r10,%rdi
goto drop;
sk = vxlan->vn4_sock->sock->sk;
48c1: 4c 89 64 24 08 mov %r12,0x8(%rsp)
rt = vxlan_get_route(vxlan, skb,
48c6: 4c 89 34 24 mov %r14,(%rsp)
48ca: 4c 89 ee mov %r13,%rsi
48cd: e8 2e ed ff ff callq 3600 <vxlan_get_route>
}
if (dst->sa.sa_family == AF_INET) {
if (!vxlan->vn4_sock)
goto drop;
sk = vxlan->vn4_sock->sock->sk;
48d2: 48 3d 00 f0 ff ff cmp $0xfffffffffffff000,%rax
48d8: 49 89 c2 mov %rax,%r10
rt = vxlan_get_route(vxlan, skb,
48db: 0f 87 c0 03 00 00 ja 4ca1 <vxlan_xmit_one+0xa81>
48e1: 48 8b 50 18 mov 0x18(%rax),%rdx
48e5: 48 39 d3 cmp %rdx,%rbx
48e8: 0f 84 7e 03 00 00 je 4c6c <vxlan_xmit_one+0xa4c>
48ee: 4d 85 e4 test %r12,%r12
48f1: 0f 84 87 03 00 00 je 4c7e <vxlan_xmit_one+0xa5e>
48f7: 45 0f b7 74 24 28 movzwl 0x28(%r12),%r14d
48fd: 41 83 e6 01 and $0x1,%r14d
4901: 41 f7 de neg %r14d
rdst ? rdst->remote_ifindex : 0, tos,
dst->sin.sin_addr.s_addr,
&src->sin.sin_addr.s_addr,
dst_cache, info);
if (IS_ERR(rt)) {
4904: 41 83 e6 40 and $0x40,%r14d
if (dst->sa.sa_family == AF_INET) {
if (!vxlan->vn4_sock)
goto drop;
sk = vxlan->vn4_sock->sock->sk;
rt = vxlan_get_route(vxlan, skb,
4908: 41 0f b7 85 c0 00 00 movzwl 0xc0(%r13),%eax
490f: 00
rdst ? rdst->remote_ifindex : 0, tos,
dst->sin.sin_addr.s_addr,
&src->sin.sin_addr.s_addr,
dst_cache, info);
if (IS_ERR(rt)) {
4910: 66 83 f8 08 cmp $0x8,%ax
&dst->sin.sin_addr.s_addr);
dev->stats.tx_carrier_errors++;
goto tx_error;
}
if (rt->dst.dev == dev) {
4914: 0f 84 14 03 00 00 je 4c2e <vxlan_xmit_one+0xa0e>
491a: 0f b6 bc 24 88 00 00 movzbl 0x88(%rsp),%edi
4921: 00
dev->stats.collisions++;
goto rt_tx_error;
}
/* Bypass encapsulation if the destination is local */
if (!info && rt->rt_flags & RTCF_LOCAL &&
4922: 45 31 e4 xor %r12d,%r12d
4925: 83 e7 fc and $0xfffffffc,%edi
return;
}
if (!info)
udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM_TX);
else if (info->key.tun_flags & TUNNEL_DONT_FRAGMENT)
4928: 66 3d 86 dd cmp $0xdd86,%ax
492c: 40 88 bc 24 80 00 00 mov %dil,0x80(%rsp)
4933: 00
union vxlan_addr *src;
struct vxlan_metadata _md;
struct vxlan_metadata *md = &_md;
__be16 src_port = 0, dst_port;
__be32 vni, label;
__be16 df = 0;
4934: 0f 84 b3 01 00 00 je 4aed <vxlan_xmit_one+0x8cd>
493a: 80 bc 24 87 00 00 00 cmpb $0x0,0x87(%rsp)
4941: 00
/* Extract dsfield from inner protocol */
static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
const struct sk_buff *skb)
{
if (skb->protocol == htons(ETH_P_IP))
4942: 75 23 jne 4967 <vxlan_xmit_one+0x747>
4944: 49 8b 42 28 mov 0x28(%r10),%rax
4948: 48 83 e0 fc and $0xfffffffffffffffc,%rax
494c: 8b 40 24 mov 0x24(%rax),%eax
494f: 85 c0 test %eax,%eax
4951: 75 0d jne 4960 <vxlan_xmit_one+0x740>
4953: 48 8b 82 80 04 00 00 mov 0x480(%rdx),%rax
return iph->tos;
else if (skb->protocol == htons(ETH_P_IPV6))
495a: 8b 80 a4 03 00 00 mov 0x3a4(%rax),%eax
4960: 88 84 24 87 00 00 00 mov %al,0x87(%rsp)
4967: 0f b6 44 24 48 movzbl 0x48(%rsp),%eax
udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM_TX);
else if (info->key.tun_flags & TUNNEL_DONT_FRAGMENT)
df = htons(IP_DF);
tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
496c: 44 8b 4c 24 64 mov 0x64(%rsp),%r9d
4971: 4c 89 d6 mov %r10,%rsi
static inline u32
dst_metric_raw(const struct dst_entry *dst, const int metric)
{
u32 *p = DST_METRICS_PTR(dst);
return p[metric-1];
4974: 4c 8b 44 24 40 mov 0x40(%rsp),%r8
4979: 8b 4c 24 50 mov 0x50(%rsp),%ecx
return skb->skb_iif;
}
static inline int ip4_dst_hoplimit(const struct dst_entry *dst)
{
int hoplimit = dst_metric_raw(dst, RTAX_HOPLIMIT);
497d: ba 14 00 00 00 mov $0x14,%edx
struct net *net = dev_net(dst->dev);
if (hoplimit == 0)
4982: 4c 89 ef mov %r13,%rdi
hoplimit = net->ipv4.sysctl_ip_default_ttl;
4985: 4c 89 94 24 88 00 00 mov %r10,0x88(%rsp)
498c: 00
498d: 89 04 24 mov %eax,(%rsp)
4990: e8 fb f3 ff ff callq 3d90 <vxlan_build_skb>
4995: 85 c0 test %eax,%eax
err = vxlan_build_skb(skb, &rt->dst, sizeof(struct iphdr),
4997: 4c 8b 94 24 88 00 00 mov 0x88(%rsp),%r10
499e: 00
499f: 0f 88 30 01 00 00 js 4ad5 <vxlan_xmit_one+0x8b5>
49a5: 0f b6 44 24 48 movzbl 0x48(%rsp),%eax
49aa: 48 8b 5c 24 68 mov 0x68(%rsp),%rbx
49af: 45 0f b7 f6 movzwl %r14w,%r14d
49b3: 0f b6 74 24 70 movzbl 0x70(%rsp),%esi
49b8: 45 8b 47 04 mov 0x4(%r15),%r8d
49bc: 4c 89 ea mov %r13,%rdx
49bf: 44 0f b6 8c 24 80 00 movzbl 0x80(%rsp),%r9d
49c6: 00 00
vni, md, flags, udp_sum);
if (err < 0)
49c8: 4c 89 d7 mov %r10,%rdi
49cb: 8b 4b 04 mov 0x4(%rbx),%ecx
49ce: 44 89 74 24 08 mov %r14d,0x8(%rsp)
49d3: 83 f0 01 xor $0x1,%eax
goto xmit_tx_error;
udp_tunnel_xmit_skb(rt, sk, skb, src->sin.sin_addr.s_addr,
49d6: 0f b6 c0 movzbl %al,%eax
49d9: 89 74 24 20 mov %esi,0x20(%rsp)
49dd: 0f b7 74 24 62 movzwl 0x62(%rsp),%esi
49e2: 89 44 24 28 mov %eax,0x28(%rsp)
49e6: 0f b7 84 24 84 00 00 movzwl 0x84(%rsp),%eax
49ed: 00
49ee: 45 09 e1 or %r12d,%r9d
49f1: 45 0f b6 c9 movzbl %r9b,%r9d
49f5: 89 74 24 10 mov %esi,0x10(%rsp)
49f9: 48 8b 74 24 78 mov 0x78(%rsp),%rsi
49fe: 89 44 24 18 mov %eax,0x18(%rsp)
4a02: 0f b6 84 24 87 00 00 movzbl 0x87(%rsp),%eax
4a09: 00
4a0a: 89 04 24 mov %eax,(%rsp)
4a0d: e8 00 00 00 00 callq 4a12 <vxlan_xmit_one+0x7f2>
4a12: e9 00 f9 ff ff jmpq 4317 <vxlan_xmit_one+0xf7>
4a17: b8 0a 00 00 00 mov $0xa,%eax
4a1c: 49 8b 54 24 20 mov 0x20(%r12),%rdx
4a21: 66 89 84 24 a0 00 00 mov %ax,0xa0(%rsp)
4a28: 00
4a29: 49 8b 44 24 18 mov 0x18(%r12),%rax
4a2e: 48 89 94 24 b0 00 00 mov %rdx,0xb0(%rsp)
4a35: 00
4a36: 48 89 84 24 a8 00 00 mov %rax,0xa8(%rsp)
4a3d: 00
4a3e: 49 8b 44 24 08 mov 0x8(%r12),%rax
4a43: 49 8b 54 24 10 mov 0x10(%r12),%rdx
dev->name);
goto drop;
}
dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
vni = vxlan_tun_id_to_vni(info->key.tun_id);
remote_ip.sa.sa_family = ip_tunnel_info_af(info);
4a48: 48 89 84 24 c8 00 00 mov %rax,0xc8(%rsp)
4a4f: 00
if (remote_ip.sa.sa_family == AF_INET) {
remote_ip.sin.sin_addr.s_addr = info->key.u.ipv4.dst;
local_ip.sin.sin_addr.s_addr = info->key.u.ipv4.src;
} else {
remote_ip.sin6.sin6_addr = info->key.u.ipv6.dst;
4a50: b8 0a 00 00 00 mov $0xa,%eax
dev->name);
goto drop;
}
dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
vni = vxlan_tun_id_to_vni(info->key.tun_id);
remote_ip.sa.sa_family = ip_tunnel_info_af(info);
4a55: 48 89 94 24 d0 00 00 mov %rdx,0xd0(%rsp)
4a5c: 00
if (remote_ip.sa.sa_family == AF_INET) {
remote_ip.sin.sin_addr.s_addr = info->key.u.ipv4.dst;
local_ip.sin.sin_addr.s_addr = info->key.u.ipv4.src;
} else {
remote_ip.sin6.sin6_addr = info->key.u.ipv6.dst;
4a5d: e9 5f fc ff ff jmpq 46c1 <vxlan_xmit_one+0x4a1>
4a62: 41 8b 85 b4 00 00 00 mov 0xb4(%r13),%eax
4a69: c6 44 24 48 00 movb $0x0,0x48(%rsp)
local_ip.sin6.sin6_addr = info->key.u.ipv6.src;
4a6e: 89 84 24 94 00 00 00 mov %eax,0x94(%rsp)
4a75: 48 8d 84 24 94 00 00 lea 0x94(%rsp),%rax
4a7c: 00
4a7d: 48 89 44 24 40 mov %rax,0x40(%rsp)
4a82: e9 b0 fa ff ff jmpq 4537 <vxlan_xmit_one+0x317>
4a87: 4c 89 f7 mov %r14,%rdi
4a8a: e8 00 00 00 00 callq 4a8f <vxlan_xmit_one+0x86f>
4a8f: 88 84 24 87 00 00 00 mov %al,0x87(%rsp)
udp_sum = !!(info->key.tun_flags & TUNNEL_CSUM);
if (info->options_len)
md = ip_tunnel_info_opts(info);
} else {
md->gbp = skb->mark;
4a96: e9 9a fc ff ff jmpq 4735 <vxlan_xmit_one+0x515>
__be32 vni, label;
__be16 df = 0;
__u8 tos, ttl;
int err;
u32 flags = vxlan->flags;
bool udp_sum = false;
4a9b: 48 8b 44 24 38 mov 0x38(%rsp),%rax
udp_sum = !!(info->key.tun_flags & TUNNEL_CSUM);
if (info->options_len)
md = ip_tunnel_info_opts(info);
} else {
md->gbp = skb->mark;
4aa0: 44 0f b6 60 01 movzbl 0x1(%rax),%r12d
const struct iphdr *old_iph;
union vxlan_addr *dst;
union vxlan_addr remote_ip, local_ip;
union vxlan_addr *src;
struct vxlan_metadata _md;
struct vxlan_metadata *md = &_md;
4aa5: 44 0f b6 bc 24 88 00 movzbl 0x88(%rsp),%r15d
4aac: 00 00
4aae: 41 83 e4 03 and $0x3,%r12d
4ab2: b8 02 00 00 00 mov $0x2,%eax
if (!info)
udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX);
tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
ttl = ttl ? : ip6_dst_hoplimit(ndst);
4ab7: 41 83 e7 fc and $0xfffffffc,%r15d
4abb: 41 80 fc 03 cmp $0x3,%r12b
4abf: 44 0f 44 e0 cmove %eax,%r12d
4ac3: e9 5f fc ff ff jmpq 4727 <vxlan_xmit_one+0x507>
4ac8: 4c 89 f7 mov %r14,%rdi
/* Extract dsfield from inner protocol */
static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
const struct sk_buff *skb)
{
if (skb->protocol == htons(ETH_P_IP))
return iph->tos;
4acb: e8 00 00 00 00 callq 4ad0 <vxlan_xmit_one+0x8b0>
4ad0: e9 42 f8 ff ff jmpq 4317 <vxlan_xmit_one+0xf7>
4ad5: 45 31 ed xor %r13d,%r13d
4ad8: 4c 89 d7 mov %r10,%rdi
4adb: e8 00 00 00 00 callq 4ae0 <vxlan_xmit_one+0x8c0>
4ae0: 48 83 83 58 01 00 00 addq $0x1,0x158(%rbx)
4ae7: 01
4ae8: e9 22 f8 ff ff jmpq 430f <vxlan_xmit_one+0xef>
4aed: 48 8b 44 24 38 mov 0x38(%rsp),%rax
4af2: 44 0f b7 20 movzwl (%rax),%r12d
4af6: 66 41 c1 c4 08 rol $0x8,%r12w
skb_scrub_packet(skb, xnet);
err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),
vni, md, flags, udp_sum);
if (err < 0) {
dst_release(ndst);
4afb: 66 41 c1 ec 04 shr $0x4,%r12w
return;
4b00: 0f b6 84 24 88 00 00 movzbl 0x88(%rsp),%eax
4b07: 00
{
/* dst_release() accepts a NULL parameter.
* We rely on dst being first structure in struct rtable
*/
BUILD_BUG_ON(offsetof(struct rtable, dst) != 0);
dst_release(&rt->dst);
4b08: 41 83 e4 03 and $0x3,%r12d
4b0c: 41 b9 02 00 00 00 mov $0x2,%r9d
/* skb is already freed. */
skb = NULL;
rt_tx_error:
ip_rt_put(rt);
tx_error:
dev->stats.tx_errors++;
4b12: 83 e0 fc and $0xfffffffc,%eax
4b15: 41 80 fc 03 cmp $0x3,%r12b
4b19: 88 84 24 80 00 00 00 mov %al,0x80(%rsp)
4b20: 45 0f 44 e1 cmove %r9d,%r12d
4b24: e9 11 fe ff ff jmpq 493a <vxlan_xmit_one+0x71a>
4b29: 31 d2 xor %edx,%edx
4b2b: e9 6a fa ff ff jmpq 459a <vxlan_xmit_one+0x37a>
4b30: 4d 85 db test %r11,%r11
4b33: 0f 85 24 01 00 00 jne 4c5d <vxlan_xmit_one+0xa3d>
4b39: 80 3d 00 00 00 00 00 cmpb $0x0,0x0(%rip) # 4b40 <vxlan_xmit_one+0x920>
4b40: 0f 85 c1 f7 ff ff jne 4307 <vxlan_xmit_one+0xe7>
4b46: 48 89 d9 mov %rbx,%rcx
4b49: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
4b50: be 9b 07 00 00 mov $0x79b,%esi
4b55: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
if (!vxlan->vn6_sock)
goto drop;
sk = vxlan->vn6_sock->sock->sk;
ndst = vxlan6_get_route(vxlan, skb,
4b5c: c6 05 00 00 00 00 01 movb $0x1,0x0(%rip) # 4b63 <vxlan_xmit_one+0x943>
bool udp_sum = false;
bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
info = skb_tunnel_info(skb);
if (rdst) {
4b63: e8 00 00 00 00 callq 4b68 <vxlan_xmit_one+0x948>
4b68: e9 9a f7 ff ff jmpq 4307 <vxlan_xmit_one+0xe7>
dst = &rdst->remote_ip;
src = &vxlan->cfg.saddr;
dst_cache = &rdst->dst_cache;
} else {
if (!info) {
WARN_ONCE(1, "%s: Missing encapsulation instructions\n",
4b6d: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
4b72: 4c 89 f7 mov %r14,%rdi
4b75: e8 00 00 00 00 callq 4b7a <vxlan_xmit_one+0x95a>
4b7a: 48 83 83 78 01 00 00 addq $0x1,0x178(%rbx)
4b81: 01
4b82: e9 59 ff ff ff jmpq 4ae0 <vxlan_xmit_one+0x8c0>
4b87: 48 8b 44 24 38 mov 0x38(%rsp),%rax
4b8c: 44 0f b7 20 movzwl (%rax),%r12d
4b90: 66 41 c1 c4 08 rol $0x8,%r12w
4b95: 66 41 c1 ec 04 shr $0x4,%r12w
4b9a: e9 06 ff ff ff jmpq 4aa5 <vxlan_xmit_one+0x885>
4b9f: 48 8b 44 24 38 mov 0x38(%rsp),%rax
}
if (ndst->dev == dev) {
netdev_dbg(dev, "circular route to %pI6\n",
&dst->sin6.sin6_addr);
dst_release(ndst);
4ba4: 0f b6 40 01 movzbl 0x1(%rax),%eax
4ba8: 88 84 24 88 00 00 00 mov %al,0x88(%rsp)
dev->stats.collisions++;
4baf: e9 e9 f7 ff ff jmpq 439d <vxlan_xmit_one+0x17d>
goto tx_error;
4bb4: 4c 89 f7 mov %r14,%rdi
4bb7: e8 00 00 00 00 callq 4bbc <vxlan_xmit_one+0x99c>
4bbc: 0f b7 8c 24 84 00 00 movzwl 0x84(%rsp),%ecx
4bc3: 00
4bc4: 41 0f b7 17 movzwl (%r15),%edx
4bc8: 48 8b bb 78 08 00 00 mov 0x878(%rbx),%rdi
4bcf: 44 8b 83 d8 08 00 00 mov 0x8d8(%rbx),%r8d
4bd6: 8b 74 24 50 mov 0x50(%rsp),%esi
4bda: e8 91 b5 ff ff callq 170 <vxlan_find_vni>
4bdf: 48 85 c0 test %rax,%rax
4be2: 0f 84 f8 fe ff ff je 4ae0 <vxlan_xmit_one+0x8c0>
rt6i_flags = ((struct rt6_info *)ndst)->rt6i_flags;
if (!info && rt6i_flags & RTF_LOCAL &&
!(rt6i_flags & (RTCF_BROADCAST | RTCF_MULTICAST))) {
struct vxlan_dev *dst_vxlan;
dst_release(ndst);
4be8: 48 8b b3 70 08 00 00 mov 0x870(%rbx),%rsi
dst_vxlan = vxlan_find_vni(vxlan->net, vni,
4bef: 48 89 c2 mov %rax,%rdx
4bf2: 4c 89 ef mov %r13,%rdi
4bf5: e8 46 ee ff ff callq 3a40 <vxlan_encap_bypass.isra.47>
4bfa: e9 18 f7 ff ff jmpq 4317 <vxlan_xmit_one+0xf7>
4bff: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
4c04: 48 83 83 b8 01 00 00 addq $0x1,0x1b8(%rbx)
4c0b: 01
4c0c: e9 cf fe ff ff jmpq 4ae0 <vxlan_xmit_one+0x8c0>
dst->sa.sa_family, dst_port,
vxlan->flags);
if (!dst_vxlan)
4c11: 48 8b 4c 24 68 mov 0x68(%rsp),%rcx
4c16: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
goto tx_error;
vxlan_encap_bypass(skb, vxlan, dst_vxlan);
4c1d: 48 89 de mov %rbx,%rsi
4c20: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
4c27: e8 00 00 00 00 callq 4c2c <vxlan_xmit_one+0xa0c>
return;
4c2c: eb d6 jmp 4c04 <vxlan_xmit_one+0x9e4>
4c2e: 48 8b 44 24 38 mov 0x38(%rsp),%rax
4c33: 44 0f b6 60 01 movzbl 0x1(%rax),%r12d
&src->sin6.sin6_addr,
dst_cache, info);
if (IS_ERR(ndst)) {
netdev_dbg(dev, "no route to %pI6\n",
&dst->sin6.sin6_addr);
dev->stats.tx_carrier_errors++;
4c38: e9 c3 fe ff ff jmpq 4b00 <vxlan_xmit_one+0x8e0>
goto tx_error;
4c3d: 48 8b 4c 24 68 mov 0x68(%rsp),%rcx
rdst ? rdst->remote_ifindex : 0, tos,
label, &dst->sin6.sin6_addr,
&src->sin6.sin6_addr,
dst_cache, info);
if (IS_ERR(ndst)) {
netdev_dbg(dev, "no route to %pI6\n",
4c42: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
4c49: 48 89 de mov %rbx,%rsi
4c4c: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
4c53: e8 00 00 00 00 callq 4c58 <vxlan_xmit_one+0xa38>
4c58: e9 15 ff ff ff jmpq 4b72 <vxlan_xmit_one+0x952>
4c5d: 45 31 e4 xor %r12d,%r12d
4c60: e9 4e f6 ff ff jmpq 42b3 <vxlan_xmit_one+0x93>
4c65: 31 d2 xor %edx,%edx
4c67: e9 52 fc ff ff jmpq 48be <vxlan_xmit_one+0x69e>
4c6c: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
dev->stats.tx_carrier_errors++;
goto tx_error;
}
if (ndst->dev == dev) {
netdev_dbg(dev, "circular route to %pI6\n",
4c71: 48 83 83 78 01 00 00 addq $0x1,0x178(%rbx)
4c78: 01
4c79: e9 5a fe ff ff jmpq 4ad8 <vxlan_xmit_one+0x8b8>
4c7e: 8b 80 a4 00 00 00 mov 0xa4(%rax),%eax
4c84: 85 c0 test %eax,%eax
4c86: 78 71 js 4cf9 <vxlan_xmit_one+0xad9>
4c88: 8b 44 24 64 mov 0x64(%rsp),%eax
4c8c: 45 31 f6 xor %r14d,%r14d
bool udp_sum = false;
bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
info = skb_tunnel_info(skb);
if (rdst) {
4c8f: c1 e8 06 shr $0x6,%eax
4c92: 83 f0 01 xor $0x1,%eax
if (dst->sa.sa_family == AF_INET) {
if (!vxlan->vn4_sock)
goto drop;
sk = vxlan->vn4_sock->sock->sk;
rt = vxlan_get_route(vxlan, skb,
4c95: 83 e0 01 and $0x1,%eax
4c98: 88 44 24 48 mov %al,0x48(%rsp)
4c9c: e9 67 fc ff ff jmpq 4908 <vxlan_xmit_one+0x6e8>
}
if (rt->dst.dev == dev) {
netdev_dbg(dev, "circular route to %pI4\n",
&dst->sin.sin_addr.s_addr);
dev->stats.collisions++;
4ca1: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
4ca6: e9 59 ff ff ff jmpq 4c04 <vxlan_xmit_one+0x9e4>
goto rt_tx_error;
4cab: 49 8d 4f 04 lea 0x4(%r15),%rcx
}
/* Bypass encapsulation if the destination is local */
if (!info && rt->rt_flags & RTCF_LOCAL &&
4caf: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
4cb6: 48 89 de mov %rbx,%rsi
vxlan_encap_bypass(skb, vxlan, dst_vxlan);
return;
}
if (!info)
udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM_TX);
4cb9: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
4cc0: e8 00 00 00 00 callq 4cc5 <vxlan_xmit_one+0xaa5>
4cc5: e9 3a ff ff ff jmpq 4c04 <vxlan_xmit_one+0x9e4>
4cca: 49 8d 4f 04 lea 0x4(%r15),%rcx
4cce: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
4cd5: 48 89 de mov %rbx,%rsi
4cd8: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
rdst ? rdst->remote_ifindex : 0, tos,
dst->sin.sin_addr.s_addr,
&src->sin.sin_addr.s_addr,
dst_cache, info);
if (IS_ERR(rt)) {
netdev_dbg(dev, "no route to %pI4\n",
4cdf: 48 89 84 24 88 00 00 mov %rax,0x88(%rsp)
4ce6: 00
4ce7: e8 00 00 00 00 callq 4cec <vxlan_xmit_one+0xacc>
4cec: 4c 8b 94 24 88 00 00 mov 0x88(%rsp),%r10
4cf3: 00
4cf4: e9 78 ff ff ff jmpq 4c71 <vxlan_xmit_one+0xa51>
4cf9: a9 00 00 00 30 test $0x30000000,%eax
dev->stats.tx_carrier_errors++;
goto tx_error;
}
if (rt->dst.dev == dev) {
netdev_dbg(dev, "circular route to %pI4\n",
4cfe: 4c 89 d7 mov %r10,%rdi
4d01: 75 85 jne 4c88 <vxlan_xmit_one+0xa68>
4d03: e9 af fe ff ff jmpq 4bb7 <vxlan_xmit_one+0x997>
4d08: e8 00 00 00 00 callq 4d0d <vxlan_xmit_one+0xaed>
4d0d: 0f 1f 00 nopl (%rax)
0000000000004d10 <vxlan_fill_metadata_dst>:
4d10: e8 00 00 00 00 callq 4d15 <vxlan_fill_metadata_dst+0x5>
4d15: 55 push %rbp
4d16: 48 89 f9 mov %rdi,%rcx
4d19: 48 89 e5 mov %rsp,%rbp
4d1c: 41 57 push %r15
4d1e: 41 56 push %r14
4d20: 41 55 push %r13
4d22: 41 54 push %r12
4d24: 49 89 f4 mov %rsi,%r12
4d27: 53 push %rbx
4d28: 4c 8d af 40 08 00 00 lea 0x840(%rdi),%r13
4d2f: 48 83 ec 30 sub $0x30,%rsp
dev->stats.collisions++;
goto rt_tx_error;
}
/* Bypass encapsulation if the destination is local */
if (!info && rt->rt_flags & RTCF_LOCAL &&
4d33: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
4d3a: 00 00
ip_rt_put(rt);
tx_error:
dev->stats.tx_errors++;
tx_free:
dev_kfree_skb(skb);
}
4d3c: 48 89 45 d0 mov %rax,-0x30(%rbp)
dst->remote_ifindex);
return __vxlan_change_mtu(dev, lowerdev, dst, new_mtu, true);
}
static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
{
4d40: 31 c0 xor %eax,%eax
4d42: 48 8b 46 58 mov 0x58(%rsi),%rax
4d46: 48 83 e0 fe and $0xfffffffffffffffe,%rax
4d4a: 0f 84 6c 02 00 00 je 4fbc <vxlan_fill_metadata_dst+0x2ac>
4d50: f6 40 61 02 testb $0x2,0x61(%rax)
4d54: 48 8d 98 a0 00 00 00 lea 0xa0(%rax),%rbx
4d5b: 0f 84 3f 02 00 00 je 4fa0 <vxlan_fill_metadata_dst+0x290>
4d61: 0f b7 91 80 09 00 00 movzwl 0x980(%rcx),%edx
4d68: 0f b7 81 7e 09 00 00 movzwl 0x97e(%rcx),%eax
4d6f: 48 8b b9 80 04 00 00 mov 0x480(%rcx),%rdi
static inline struct metadata_dst *skb_metadata_dst(struct sk_buff *skb)
{
struct metadata_dst *md_dst = (struct metadata_dst *) skb_dst(skb);
if (md_dst && md_dst->dst.flags & DST_METADATA)
4d76: 39 c2 cmp %eax,%edx
4d78: 89 45 c8 mov %eax,-0x38(%rbp)
4d7b: 89 55 cc mov %edx,-0x34(%rbp)
4d7e: 0f 8e e3 01 00 00 jle 4f67 <vxlan_fill_metadata_dst+0x257>
{
struct metadata_dst *md_dst = skb_metadata_dst(skb);
struct dst_entry *dst;
if (md_dst)
return &md_dst->u.tun_info;
4d84: 41 f6 84 24 91 00 00 testb $0x30,0x91(%r12)
4d8b: 00 30
static inline struct metadata_dst *skb_metadata_dst(struct sk_buff *skb)
{
struct metadata_dst *md_dst = (struct metadata_dst *) skb_dst(skb);
if (md_dst && md_dst->dst.flags & DST_METADATA)
4d8d: 0f 84 f8 01 00 00 je 4f8b <vxlan_fill_metadata_dst+0x27b>
struct vxlan_dev *vxlan = netdev_priv(dev);
struct ip_tunnel_info *info = skb_tunnel_info(skb);
__be16 sport, dport;
sport = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
4d93: 41 8b 84 24 a4 00 00 mov 0xa4(%r12),%eax
4d9a: 00
4d9b: 85 c0 test %eax,%eax
4d9d: 0f 85 d1 00 00 00 jne 4e74 <vxlan_fill_metadata_dst+0x164>
4da3: 4d 8b 84 24 d8 00 00 mov 0xd8(%r12),%r8
4daa: 00
static inline __be16 udp_flow_src_port(struct net *net, struct sk_buff *skb,
int min, int max, bool use_eth)
{
u32 hash;
if (min >= max) {
4dab: 45 0f b7 8c 24 c0 00 movzwl 0xc0(%r12),%r9d
4db2: 00 00
data, proto, nhoff, hlen, flags);
}
static inline __u32 skb_get_hash(struct sk_buff *skb)
{
if (!skb->l4_hash && !skb->sw_hash)
4db4: 41 0f b6 50 0b movzbl 0xb(%r8),%edx
4db9: 41 0f b6 78 08 movzbl 0x8(%r8),%edi
4dbe: 41 0f b6 40 0a movzbl 0xa(%r8),%eax
__skb_get_hash(skb);
return skb->hash;
4dc3: 41 81 e9 05 41 52 21 sub $0x21524105,%r9d
4dca: 44 01 cf add %r9d,%edi
/* Use default range */
inet_get_local_port_range(net, &min, &max);
}
hash = skb_get_hash(skb);
if (unlikely(!hash)) {
4dcd: c1 e2 18 shl $0x18,%edx
4dd0: 8d 34 3a lea (%rdx,%rdi,1),%esi
if (use_eth) {
/* Can't find a normal hash, caller has indicated an
* Ethernet packet so use that to compute a hash.
*/
hash = jhash(skb->data, 2 * ETH_ALEN,
4dd3: c1 e0 10 shl $0x10,%eax
4dd6: 41 0f b6 78 06 movzbl 0x6(%r8),%edi
{
u32 a, b, c;
const u8 *k = key;
/* Set up the internal state */
a = b = c = JHASH_INITVAL + length + initval;
4ddb: 8d 14 06 lea (%rsi,%rax,1),%edx
4dde: 41 0f b6 40 07 movzbl 0x7(%r8),%eax
4de3: 41 0f b6 70 04 movzbl 0x4(%r8),%esi
4de8: c1 e7 10 shl $0x10,%edi
4deb: 44 01 ce add %r9d,%esi
4dee: c1 e0 18 shl $0x18,%eax
4df1: 01 f0 add %esi,%eax
4df3: 41 0f b6 30 movzbl (%r8),%esi
4df7: 01 f8 add %edi,%eax
4df9: 41 0f b6 78 05 movzbl 0x5(%r8),%edi
4dfe: 44 01 ce add %r9d,%esi
4e01: 45 0f b6 48 03 movzbl 0x3(%r8),%r9d
4e06: c1 e7 08 shl $0x8,%edi
4e09: 01 c7 add %eax,%edi
4e0b: 44 89 c8 mov %r9d,%eax
4e0e: c1 e0 18 shl $0x18,%eax
4e11: 44 8d 0c 06 lea (%rsi,%rax,1),%r9d
4e15: 41 0f b6 70 02 movzbl 0x2(%r8),%esi
4e1a: 41 0f b6 40 09 movzbl 0x9(%r8),%eax
4e1f: c1 e6 10 shl $0x10,%esi
4e22: c1 e0 08 shl $0x8,%eax
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4e25: 41 01 f1 add %esi,%r9d
4e28: 41 0f b6 70 01 movzbl 0x1(%r8),%esi
case 10: c += (u32)k[9]<<8;
case 9: c += k[8];
case 8: b += (u32)k[7]<<24;
case 7: b += (u32)k[6]<<16;
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
4e2d: 01 d0 add %edx,%eax
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4e2f: 89 fa mov %edi,%edx
4e31: 31 f8 xor %edi,%eax
4e33: c1 c2 0e rol $0xe,%edx
case 10: c += (u32)k[9]<<8;
case 9: c += k[8];
case 8: b += (u32)k[7]<<24;
case 7: b += (u32)k[6]<<16;
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
4e36: 29 d0 sub %edx,%eax
4e38: c1 e6 08 shl $0x8,%esi
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4e3b: 44 01 ce add %r9d,%esi
4e3e: 89 f2 mov %esi,%edx
4e40: 89 c6 mov %eax,%esi
4e42: 31 c2 xor %eax,%edx
4e44: c1 c6 0b rol $0xb,%esi
4e47: 29 f2 sub %esi,%edx
4e49: 31 d7 xor %edx,%edi
__jhash_final(a, b, c);
4e4b: 89 fe mov %edi,%esi
4e4d: 89 d7 mov %edx,%edi
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4e4f: c1 cf 07 ror $0x7,%edi
__jhash_final(a, b, c);
4e52: 29 fe sub %edi,%esi
4e54: 89 f7 mov %esi,%edi
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4e56: 31 f0 xor %esi,%eax
4e58: c1 c7 10 rol $0x10,%edi
4e5b: 29 f8 sub %edi,%eax
__jhash_final(a, b, c);
4e5d: 89 c7 mov %eax,%edi
4e5f: 31 c2 xor %eax,%edx
4e61: c1 c7 04 rol $0x4,%edi
4e64: 29 fa sub %edi,%edx
4e66: 31 d6 xor %edx,%esi
case 6: b += (u32)k[5]<<8;
case 5: b += k[4];
case 4: a += (u32)k[3]<<24;
case 3: a += (u32)k[2]<<16;
case 2: a += (u32)k[1]<<8;
case 1: a += k[0];
4e68: c1 c2 0e rol $0xe,%edx
4e6b: 29 d6 sub %edx,%esi
4e6d: 31 f0 xor %esi,%eax
__jhash_final(a, b, c);
4e6f: c1 ce 08 ror $0x8,%esi
4e72: 29 f0 sub %esi,%eax
4e74: 8b 75 c8 mov -0x38(%rbp),%esi
4e77: 44 8b 75 cc mov -0x34(%rbp),%r14d
4e7b: 89 c2 mov %eax,%edx
4e7d: c1 e2 10 shl $0x10,%edx
4e80: 44 0f b7 7b 32 movzwl 0x32(%rbx),%r15d
4e85: 31 d0 xor %edx,%eax
4e87: 41 29 f6 sub %esi,%r14d
4e8a: 4d 63 f6 movslq %r14d,%r14
4e8d: 49 0f af c6 imul %r14,%rax
4e91: 48 c1 e8 20 shr $0x20,%rax
4e95: 44 8d 34 30 lea (%rax,%rsi,1),%r14d
4e99: 66 41 c1 c6 08 rol $0x8,%r14w
4e9e: 66 45 85 ff test %r15w,%r15w
4ea2: 75 08 jne 4eac <vxlan_fill_metadata_dst+0x19c>
* attacker is leaked. Only upper 16 bits are relevant in the
* computation for 16 bit port value.
*/
hash ^= hash << 16;
return htons((((u64) hash * (max - min)) >> 32) + min);
4ea4: 44 0f b7 b9 7c 09 00 movzwl 0x97c(%rcx),%r15d
4eab: 00
/* Since this is being sent on the wire obfuscate hash a bit
* to minimize possibility that any useful information to an
* attacker is leaked. Only upper 16 bits are relevant in the
* computation for 16 bit port value.
*/
hash ^= hash << 16;
4eac: f6 43 49 02 testb $0x2,0x49(%rbx)
vxlan->cfg.port_max, true);
dport = info->key.tp_dst ? : vxlan->cfg.dst_port;
4eb0: 75 3e jne 4ef0 <vxlan_fill_metadata_dst+0x1e0>
4eb2: 48 83 b9 60 08 00 00 cmpq $0x0,0x860(%rcx)
4eb9: 00
return htons((((u64) hash * (max - min)) >> 32) + min);
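Taken together, the two quoted source lines scale the 32-bit flow hash into the port range [min, max) without a modulo. A hypothetical standalone version (host byte order, leaving out the kernel's `htons()`):

```c
#include <stdint.h>

/* Map a 32-bit flow hash into [min, max), mirroring vxlan_src_port():
 * the xor folds the low half into the high half (only the upper 16
 * bits matter for the 16-bit result), then the 64-bit multiply/shift
 * scales the hash into the range. */
static uint16_t hash_to_src_port(uint32_t hash, uint16_t min, uint16_t max)
{
    hash ^= hash << 16;
    return (uint16_t)((((uint64_t)hash * (max - min)) >> 32) + min);
}
```

Since `hash < 2^32`, the product shifted right by 32 is strictly less than `max - min`, so the result always lands inside [min, max).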
4eba: 0f 84 03 01 00 00 je 4fc3 <vxlan_fill_metadata_dst+0x2b3>
4ec0: 0f b6 4b 2a movzbl 0x2a(%rbx),%ecx
4ec4: 44 8b 43 0c mov 0xc(%rbx),%r8d
4ec8: 4c 8d 4b 08 lea 0x8(%rbx),%r9
4ecc: 31 d2 xor %edx,%edx
4ece: 48 89 5c 24 08 mov %rbx,0x8(%rsp)
4ed3: 48 c7 04 24 00 00 00 movq $0x0,(%rsp)
4eda: 00
4edb: 4c 89 e6 mov %r12,%rsi
}
static inline unsigned short ip_tunnel_info_af(const struct ip_tunnel_info
*tun_info)
{
return tun_info->mode & IP_TUNNEL_INFO_IPV6 ? AF_INET6 : AF_INET;
4ede: 4c 89 ef mov %r13,%rdi
4ee1: e8 1a e7 ff ff callq 3600 <vxlan_get_route>
if (ip_tunnel_info_af(info) == AF_INET) {
struct rtable *rt;
if (!vxlan->vn4_sock)
4ee6: 48 3d 00 f0 ff ff cmp $0xfffffffffffff000,%rax
4eec: 76 47 jbe 4f35 <vxlan_fill_metadata_dst+0x225>
4eee: eb 59 jmp 4f49 <vxlan_fill_metadata_dst+0x239>
return -EINVAL;
rt = vxlan_get_route(vxlan, skb, 0, info->key.tos,
4ef0: 48 83 b9 68 08 00 00 cmpq $0x0,0x868(%rcx)
4ef7: 00
4ef8: 0f 84 c5 00 00 00 je 4fc3 <vxlan_fill_metadata_dst+0x2b3>
4efe: 0f b6 4b 2a movzbl 0x2a(%rbx),%ecx
4f02: 44 8b 43 2c mov 0x2c(%rbx),%r8d
4f06: 48 8d 43 08 lea 0x8(%rbx),%rax
4f0a: 4c 8d 4b 18 lea 0x18(%rbx),%r9
4f0e: 31 d2 xor %edx,%edx
4f10: 48 89 5c 24 10 mov %rbx,0x10(%rsp)
4f15: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
4f1c: 00 00
info->key.u.ipv4.dst,
&info->key.u.ipv4.src, NULL, info);
if (IS_ERR(rt))
4f1e: 48 89 04 24 mov %rax,(%rsp)
ip_rt_put(rt);
} else {
#if IS_ENABLED(CONFIG_IPV6)
struct dst_entry *ndst;
if (!vxlan->vn6_sock)
4f22: 4c 89 e6 mov %r12,%rsi
4f25: 4c 89 ef mov %r13,%rdi
4f28: e8 43 bb ff ff callq a70 <vxlan6_get_route>
4f2d: 48 3d 00 f0 ff ff cmp $0xfffffffffffff000,%rax
return -EINVAL;
ndst = vxlan6_get_route(vxlan, skb, 0, info->key.tos,
4f33: 77 14 ja 4f49 <vxlan_fill_metadata_dst+0x239>
4f35: 48 89 c7 mov %rax,%rdi
4f38: e8 00 00 00 00 callq 4f3d <vxlan_fill_metadata_dst+0x22d>
4f3d: 66 44 89 73 30 mov %r14w,0x30(%rbx)
4f42: 66 44 89 7b 32 mov %r15w,0x32(%rbx)
4f47: 31 c0 xor %eax,%eax
4f49: 48 8b 5d d0 mov -0x30(%rbp),%rbx
4f4d: 65 48 33 1c 25 28 00 xor %gs:0x28,%rbx
4f54: 00 00
4f56: 75 75 jne 4fcd <vxlan_fill_metadata_dst+0x2bd>
4f58: 48 83 c4 30 add $0x30,%rsp
4f5c: 5b pop %rbx
info->key.label, &info->key.u.ipv6.dst,
&info->key.u.ipv6.src, NULL, info);
if (IS_ERR(ndst))
4f5d: 41 5c pop %r12
4f5f: 41 5d pop %r13
4f61: 41 5e pop %r14
4f63: 41 5f pop %r15
return PTR_ERR(ndst);
dst_release(ndst);
4f65: 5d pop %rbp
4f66: c3 retq
4f67: 48 8d 55 cc lea -0x34(%rbp),%rdx
4f6b: 48 8d 75 c8 lea -0x38(%rbp),%rsi
#else /* !CONFIG_IPV6 */
return -EPFNOSUPPORT;
#endif
}
info->key.tp_src = sport;
4f6f: 48 89 4d c0 mov %rcx,-0x40(%rbp)
info->key.tp_dst = dport;
4f73: e8 00 00 00 00 callq 4f78 <vxlan_fill_metadata_dst+0x268>
return 0;
4f78: 41 f6 84 24 91 00 00 testb $0x30,0x91(%r12)
4f7f: 00 30
}
4f81: 48 8b 4d c0 mov -0x40(%rbp),%rcx
4f85: 0f 85 08 fe ff ff jne 4d93 <vxlan_fill_metadata_dst+0x83>
4f8b: 4c 89 e7 mov %r12,%rdi
4f8e: 48 89 4d c0 mov %rcx,-0x40(%rbp)
4f92: e8 00 00 00 00 callq 4f97 <vxlan_fill_metadata_dst+0x287>
{
u32 hash;
if (min >= max) {
/* Use default range */
inet_get_local_port_range(net, &min, &max);
4f97: 48 8b 4d c0 mov -0x40(%rbp),%rcx
4f9b: e9 f3 fd ff ff jmpq 4d93 <vxlan_fill_metadata_dst+0x83>
4fa0: 48 8b 80 90 00 00 00 mov 0x90(%rax),%rax
4fa7: 48 8d 58 1c lea 0x1c(%rax),%rbx
data, proto, nhoff, hlen, flags);
}
static inline __u32 skb_get_hash(struct sk_buff *skb)
{
if (!skb->l4_hash && !skb->sw_hash)
4fab: 48 85 c0 test %rax,%rax
4fae: b8 00 00 00 00 mov $0x0,%eax
4fb3: 48 0f 44 d8 cmove %rax,%rbx
4fb7: e9 a5 fd ff ff jmpq 4d61 <vxlan_fill_metadata_dst+0x51>
__skb_get_hash(skb);
4fbc: 31 db xor %ebx,%ebx
4fbe: e9 9e fd ff ff jmpq 4d61 <vxlan_fill_metadata_dst+0x51>
4fc3: b8 ea ff ff ff mov $0xffffffea,%eax
4fc8: e9 7c ff ff ff jmpq 4f49 <vxlan_fill_metadata_dst+0x239>
4fcd: e8 00 00 00 00 callq 4fd2 <vxlan_fill_metadata_dst+0x2c2>
if (md_dst)
return &md_dst->u.tun_info;
dst = skb_dst(skb);
if (dst && dst->lwtstate)
4fd2: 0f 1f 40 00 nopl 0x0(%rax)
4fd6: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
4fdd: 00 00 00
0000000000004fe0 <vxlan_gro_receive>:
info->options_len = len;
}
static inline struct ip_tunnel_info *lwt_tun_info(struct lwtunnel_state *lwtstate)
{
return (struct ip_tunnel_info *)lwtstate->data;
4fe0: e8 00 00 00 00 callq 4fe5 <vxlan_gro_receive+0x5>
4fe5: 55 push %rbp
4fe6: 49 89 f0 mov %rsi,%r8
4fe9: 48 89 e5 mov %rsp,%rbp
return lwt_tun_info(dst->lwtstate);
return NULL;
4fec: 41 57 push %r15
4fee: 41 56 push %r14
4ff0: 41 55 push %r13
4ff2: 41 54 push %r12
if (ip_tunnel_info_af(info) == AF_INET) {
struct rtable *rt;
if (!vxlan->vn4_sock)
return -EINVAL;
4ff4: 53 push %rbx
4ff5: 48 89 d3 mov %rdx,%rbx
4ff8: 48 83 ec 20 sub $0x20,%rsp
4ffc: 8b 4a 34 mov 0x34(%rdx),%ecx
#endif
}
info->key.tp_src = sport;
info->key.tp_dst = dport;
return 0;
}
4fff: 4c 8b b7 48 02 00 00 mov 0x248(%rdi),%r14
5006: 44 8d 61 08 lea 0x8(%rcx),%r12d
500a: 44 39 62 30 cmp %r12d,0x30(%rdx)
500e: 49 89 cd mov %rcx,%r13
}
static struct sk_buff **vxlan_gro_receive(struct sock *sk,
struct sk_buff **head,
struct sk_buff *skb)
{
5011: 73 50 jae 5063 <vxlan_gro_receive+0x83>
5013: 8b 92 80 00 00 00 mov 0x80(%rdx),%edx
5019: 89 d0 mov %edx,%eax
501b: 2b 83 84 00 00 00 sub 0x84(%rbx),%eax
5021: 41 39 c4 cmp %eax,%r12d
5024: 0f 87 ab 02 00 00 ja 52d5 <vxlan_gro_receive+0x2f5>
502a: 49 89 cf mov %rcx,%r15
int dev_restart(struct net_device *dev);
int skb_gro_receive(struct sk_buff **head, struct sk_buff *skb);
static inline unsigned int skb_gro_offset(const struct sk_buff *skb)
{
return NAPI_GRO_CB(skb)->data_offset;
502d: 4c 03 bb d8 00 00 00 add 0xd8(%rbx),%r15
5034: 48 c7 43 28 00 00 00 movq $0x0,0x28(%rbx)
503b: 00
skb_gro_remcsum_init(&grc);
off_vx = skb_gro_offset(skb);
hlen = off_vx + sizeof(*vh);
vh = skb_gro_header_fast(skb, off_vx);
if (skb_gro_header_hard(skb, hlen)) {
503c: c7 43 30 00 00 00 00 movl $0x0,0x30(%rbx)
5043: 75 25 jne 506a <vxlan_gro_receive+0x8a>
5045: ba 01 00 00 00 mov $0x1,%edx
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
504a: 45 31 ff xor %r15d,%r15d
504d: 66 09 53 38 or %dx,0x38(%rbx)
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
5051: 48 83 c4 20 add $0x20,%rsp
5055: 4c 89 f8 mov %r15,%rax
5058: 5b pop %rbx
5059: 41 5c pop %r12
vh = skb_gro_header_slow(skb, hlen, off_vx);
if (unlikely(!vh))
505b: 41 5d pop %r13
505d: 41 5e pop %r14
505f: 41 5f pop %r15
5061: 5d pop %rbp
5062: c3 retq
5063: 49 89 cf mov %rcx,%r15
unsigned int offset)
{
if (!pskb_may_pull(skb, hlen))
return NULL;
NAPI_GRO_CB(skb)->frag0 = NULL;
5066: 4c 03 7a 28 add 0x28(%rdx),%r15
506a: f6 43 4a 04 testb $0x4,0x4a(%rbx)
NAPI_GRO_CB(skb)->frag0_len = 0;
506e: 74 2c je 509c <vxlan_gro_receive+0xbc>
5070: 31 d2 xor %edx,%edx
5072: be 08 00 00 00 mov $0x8,%esi
static struct sk_buff **vxlan_gro_receive(struct sock *sk,
struct sk_buff **head,
struct sk_buff *skb)
{
struct sk_buff *p, **pp = NULL;
5077: 4c 89 ff mov %r15,%rdi
507a: 4c 89 45 c8 mov %r8,-0x38(%rbp)
pp = eth_gro_receive(head, skb);
flush = 0;
out:
skb_gro_remcsum_cleanup(skb, &grc);
NAPI_GRO_CB(skb)->flush |= flush;
507e: 48 89 4d d0 mov %rcx,-0x30(%rbp)
return pp;
}
5082: e8 00 00 00 00 callq 5087 <vxlan_gro_receive+0xa7>
5087: 4c 8b 45 c8 mov -0x38(%rbp),%r8
508b: 48 8b 4d d0 mov -0x30(%rbp),%rcx
508f: f7 d0 not %eax
5091: 8b 53 4c mov 0x4c(%rbx),%edx
}
static inline void *skb_gro_header_fast(struct sk_buff *skb,
unsigned int offset)
{
return NAPI_GRO_CB(skb)->frag0 + offset;
5094: 01 c2 add %eax,%edx
5096: 83 d2 00 adc $0x0,%edx
5099: 89 53 4c mov %edx,0x4c(%rbx)
}
static inline void skb_gro_postpull_rcsum(struct sk_buff *skb,
const void *start, unsigned int len)
{
if (NAPI_GRO_CB(skb)->csum_valid)
509c: 41 f7 07 00 20 00 00 testl $0x2000,(%r15)
NAPI_GRO_CB(skb)->csum = csum_sub(NAPI_GRO_CB(skb)->csum,
50a3: 0f 85 a5 00 00 00 jne 514e <vxlan_gro_receive+0x16e>
50a9: 41 bd 02 00 00 00 mov $0x2,%r13d
50af: 45 31 f6 xor %r14d,%r14d
50b2: 45 31 e4 xor %r12d,%r12d
50b5: 83 43 34 08 addl $0x8,0x34(%rbx)
50b9: 49 8b 00 mov (%r8),%rax
50bc: 48 85 c0 test %rax,%rax
csum_ipv6_magic(const struct in6_addr *saddr, const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum sum);
static inline unsigned add32_with_carry(unsigned a, unsigned b)
{
asm("addl %2,%0\n\t"
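The `addl`/`adcl $0` pair in this inline asm is an end-around-carry add, the building block of one's-complement checksums. A portable C equivalent, for reference:

```c
#include <stdint.h>

/* One's-complement 32-bit add: if the addition carries out of bit 31,
 * the carry is added back in (addl %2,%0; adcl $0,%0 on x86). */
static uint32_t add32_with_carry(uint32_t a, uint32_t b)
{
    uint64_t sum = (uint64_t)a + b;
    return (uint32_t)sum + (uint32_t)(sum >> 32);
}
```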
50bf: 75 0e jne 50cf <vxlan_gro_receive+0xef>
50c1: eb 34 jmp 50f7 <vxlan_gro_receive+0x117>
50c3: 80 60 4a fe andb $0xfe,0x4a(%rax)
50c7: 48 8b 00 mov (%rax),%rax
50ca: 48 85 c0 test %rax,%rax
skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr));
flags = vh->vx_flags;
if ((flags & VXLAN_HF_RCO) && (vs->flags & VXLAN_F_REMCSUM_RX)) {
50cd: 74 28 je 50f7 <vxlan_gro_receive+0x117>
50cf: f6 40 4a 01 testb $0x1,0x4a(%rax)
50d3: 74 f2 je 50c7 <vxlan_gro_receive+0xe7>
50d5: 48 89 ca mov %rcx,%rdx
50d8: 48 03 90 d8 00 00 00 add 0xd8(%rax),%rdx
};
static inline void skb_gro_remcsum_init(struct gro_remcsum *grc)
{
grc->offset = 0;
grc->delta = 0;
50df: 8b 3a mov (%rdx),%edi
50e1: 41 39 3f cmp %edi,(%r15)
__wsum delta;
};
static inline void skb_gro_remcsum_init(struct gro_remcsum *grc)
{
grc->offset = 0;
50e4: 75 dd jne 50c3 <vxlan_gro_receive+0xe3>
return skb->len - NAPI_GRO_CB(skb)->data_offset;
}
static inline void skb_gro_pull(struct sk_buff *skb, unsigned int len)
{
NAPI_GRO_CB(skb)->data_offset += len;
50e6: 8b 72 04 mov 0x4(%rdx),%esi
goto out;
}
skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */
for (p = *head; p; p = p->next) {
50e9: 41 39 77 04 cmp %esi,0x4(%r15)
50ed: 75 d4 jne 50c3 <vxlan_gro_receive+0xe3>
50ef: 48 8b 00 mov (%rax),%rax
50f2: 48 85 c0 test %rax,%rax
continue;
vh2 = (struct vxlanhdr *)(p->data + off_vx);
if (vh->vx_flags != vh2->vx_flags ||
vh->vx_vni != vh2->vx_vni) {
NAPI_GRO_CB(p)->same_flow = 0;
50f5: 75 d8 jne 50cf <vxlan_gro_receive+0xef>
goto out;
}
skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */
for (p = *head; p; p = p->next) {
50f7: 48 89 de mov %rbx,%rsi
50fa: 4c 89 c7 mov %r8,%rdi
50fd: e8 00 00 00 00 callq 5102 <vxlan_gro_receive+0x122>
if (!NAPI_GRO_CB(p)->same_flow)
5102: 31 d2 xor %edx,%edx
5104: 49 89 c7 mov %rax,%r15
continue;
vh2 = (struct vxlanhdr *)(p->data + off_vx);
5107: 45 85 f6 test %r14d,%r14d
510a: 0f 84 3d ff ff ff je 504d <vxlan_gro_receive+0x6d>
if (vh->vx_flags != vh2->vx_flags ||
5110: 44 89 e1 mov %r12d,%ecx
5113: 41 83 c4 02 add $0x2,%r12d
5117: 44 3b 63 30 cmp 0x30(%rbx),%r12d
511b: 0f 87 7c 01 00 00 ja 529d <vxlan_gro_receive+0x2bd>
goto out;
}
skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */
for (p = *head; p; p = p->next) {
5121: 48 03 4b 28 add 0x28(%rbx),%rcx
5125: 0f b7 31 movzwl (%rcx),%esi
NAPI_GRO_CB(p)->same_flow = 0;
continue;
}
}
pp = eth_gro_receive(head, skb);
5128: 44 89 f0 mov %r14d,%eax
512b: f7 d6 not %esi
512d: 01 f0 add %esi,%eax
512f: 83 d0 00 adc $0x0,%eax
5132: 89 c6 mov %eax,%esi
5134: 66 31 c0 xor %ax,%ax
struct gro_remcsum *grc)
{
void *ptr;
size_t plen = grc->offset + sizeof(u16);
if (!grc->delta)
5137: c1 e6 10 shl $0x10,%esi
513a: 01 f0 add %esi,%eax
513c: 15 ff ff 00 00 adc $0xffff,%eax
}
static inline void *skb_gro_header_fast(struct sk_buff *skb,
unsigned int offset)
{
return NAPI_GRO_CB(skb)->frag0 + offset;
5141: f7 d0 not %eax
if (!grc->delta)
return;
ptr = skb_gro_header_fast(skb, grc->offset);
if (skb_gro_header_hard(skb, grc->offset + sizeof(u16))) {
5143: c1 e8 10 shr $0x10,%eax
5146: 66 89 01 mov %ax,(%rcx)
5149: e9 ff fe ff ff jmpq 504d <vxlan_gro_receive+0x6d>
514e: 41 8b 86 1c 20 00 00 mov 0x201c(%r14),%eax
5155: f6 c4 04 test $0x4,%ah
5158: 0f 84 4b ff ff ff je 50a9 <vxlan_gro_receive+0xc9>
515e: 0f b6 b3 93 00 00 00 movzbl 0x93(%rbx),%esi
* the last step before putting a checksum into a packet.
* Make sure not to mix with 64bit checksums.
*/
static inline __sum16 csum_fold(__wsum sum)
{
asm(" addl %1,%0\n"
5165: 40 f6 c6 10 test $0x10,%sil
5169: 0f 85 9a 01 00 00 jne 5309 <vxlan_gro_receive+0x329>
516f: f6 43 4a 04 testb $0x4,0x4a(%rbx)
return delta;
}
static inline void remcsum_unadjust(__sum16 *psum, __wsum delta)
{
*psum = csum_fold(csum_sub(delta, *psum));
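The `csum_fold()` shown above is the x86 inline-asm form; the generic C version folds the 32-bit partial sum into 16 bits and complements it:

```c
#include <stdint.h>

/* Fold a 32-bit one's-complement sum to 16 bits and invert it, as in
 * the kernel's generic csum_fold(). Two folds are enough: the second
 * absorbs any carry produced by the first. */
static uint16_t csum_fold(uint32_t sum)
{
    sum = (sum & 0xffff) + (sum >> 16);  /* fold high half in */
    sum = (sum & 0xffff) + (sum >> 16);  /* absorb the carry  */
    return (uint16_t)~sum;
}
```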
5173: 0f 84 cc fe ff ff je 5045 <vxlan_gro_receive+0x65>
5179: 49 63 57 04 movslq 0x4(%r15),%rdx
517d: 41 89 d6 mov %edx,%r14d
skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr));
flags = vh->vx_flags;
if ((flags & VXLAN_HF_RCO) && (vs->flags & VXLAN_F_REMCSUM_RX)) {
5180: 48 c1 fa 3f sar $0x3f,%rdx
5184: 41 81 e6 00 00 00 7f and $0x7f000000,%r14d
518b: 48 83 e2 f6 and $0xfffffffffffffff6,%rdx
struct gro_remcsum *grc,
bool nopartial)
{
size_t start, offset;
if (skb->remcsum_offload)
518f: 41 0f ce bswap %r14d
5192: 48 83 c2 10 add $0x10,%rdx
5196: 45 01 f6 add %r14d,%r14d
5199: f6 c4 10 test $0x10,%ah
519c: 0f 84 75 01 00 00 je 5317 <vxlan_gro_receive+0x337>
return vh;
if (!NAPI_GRO_CB(skb)->csum_valid)
51a2: 45 89 f1 mov %r14d,%r9d
51a5: 49 01 d1 add %rdx,%r9
51a8: 49 63 d6 movslq %r14d,%rdx
skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr));
flags = vh->vx_flags;
if ((flags & VXLAN_HF_RCO) && (vs->flags & VXLAN_F_REMCSUM_RX)) {
vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),
51ab: 4d 63 d1 movslq %r9d,%r10
#endif
}
static inline size_t vxlan_rco_start(__be32 vni_field)
{
return be32_to_cpu(vni_field & VXLAN_RCO_MASK) << VXLAN_RCO_SHIFT;
51ae: 49 8d 42 02 lea 0x2(%r10),%rax
}
static inline size_t vxlan_rco_offset(__be32 vni_field)
{
return (vni_field & VXLAN_RCO_UDP) ?
offsetof(struct udphdr, check) :
51b2: 48 39 d0 cmp %rdx,%rax
#endif
}
static inline size_t vxlan_rco_start(__be32 vni_field)
{
return be32_to_cpu(vni_field & VXLAN_RCO_MASK) << VXLAN_RCO_SHIFT;
51b5: 48 0f 42 c2 cmovb %rdx,%rax
51b9: 41 8d 44 05 08 lea 0x8(%r13,%rax,1),%eax
}
static inline size_t vxlan_rco_offset(__be32 vni_field)
{
return (vni_field & VXLAN_RCO_UDP) ?
offsetof(struct udphdr, check) :
51be: 3b 43 30 cmp 0x30(%rbx),%eax
#endif
}
static inline size_t vxlan_rco_start(__be32 vni_field)
{
return be32_to_cpu(vni_field & VXLAN_RCO_MASK) << VXLAN_RCO_SHIFT;
51c1: 0f 86 72 01 00 00 jbe 5339 <vxlan_gro_receive+0x359>
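`vxlan_rco_start()`/`vxlan_rco_offset()` decode the remote-checksum-offload fields carried in the reserved low byte of the on-wire VNI word, which is exactly what the `and $0x7f000000 / bswap / add` sequence above computes. A host-order sketch; the constants are the host-order view of the kernel's be32 `VXLAN_RCO_MASK`/`VXLAN_RCO_UDP`, and the checksum offsets are assumed to match `struct udphdr`/`struct tcphdr`:

```c
#include <stddef.h>
#include <stdint.h>

#define RCO_START_MASK 0x7fu  /* low 7 bits of the VNI word's last byte */
#define RCO_UDP_BIT    0x80u  /* set: checksum field is UDP's, else TCP's */
#define RCO_SHIFT      1
#define UDP_CHECK_OFF  6u     /* offsetof(struct udphdr, check) */
#define TCP_CHECK_OFF  16u    /* offsetof(struct tcphdr, check) */

/* last_vni_byte is the reserved low byte of the on-wire vx_vni word. */
static size_t rco_start(uint8_t last_vni_byte)
{
    /* Start of checksummed region, in 2-byte units on the wire. */
    return (size_t)(last_vni_byte & RCO_START_MASK) << RCO_SHIFT;
}

static size_t rco_offset(uint8_t last_vni_byte)
{
    /* Where the checksum field itself lives, relative to the start. */
    return (last_vni_byte & RCO_UDP_BIT) ? UDP_CHECK_OFF : TCP_CHECK_OFF;
}
```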
51c7: 8b b3 80 00 00 00 mov 0x80(%rbx),%esi
__wsum delta;
size_t plen = hdrlen + max_t(size_t, offset + sizeof(u16), start);
BUG_ON(!NAPI_GRO_CB(skb)->csum_valid);
if (!nopartial) {
51cd: 89 f2 mov %esi,%edx
51cf: 2b 93 84 00 00 00 sub 0x84(%rbx),%edx
if (!NAPI_GRO_CB(skb)->csum_valid)
return NULL;
start = vxlan_rco_start(vni_field);
offset = start + vxlan_rco_offset(vni_field);
51d5: 39 d0 cmp %edx,%eax
51d7: 0f 87 9a 01 00 00 ja 5377 <vxlan_gro_receive+0x397>
int start, int offset,
struct gro_remcsum *grc,
bool nopartial)
{
__wsum delta;
size_t plen = hdrlen + max_t(size_t, offset + sizeof(u16), start);
51dd: 49 89 cf mov %rcx,%r15
NAPI_GRO_CB(skb)->gro_remcsum_start = off + hdrlen + start;
return ptr;
}
ptr = skb_gro_header_fast(skb, off);
if (skb_gro_header_hard(skb, off + plen)) {
51e0: 4c 03 bb d8 00 00 00 add 0xd8(%rbx),%r15
51e7: 48 c7 43 28 00 00 00 movq $0x0,0x28(%rbx)
51ee: 00
51ef: c7 43 30 00 00 00 00 movl $0x0,0x30(%rbx)
51f6: 0f 84 b4 01 00 00 je 53b0 <vxlan_gro_receive+0x3d0>
51fc: 49 8d 7f 08 lea 0x8(%r15),%rdi
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
5200: 31 d2 xor %edx,%edx
5202: 44 89 f6 mov %r14d,%esi
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
5205: 4c 89 45 b8 mov %r8,-0x48(%rbp)
5209: 4c 89 4d c0 mov %r9,-0x40(%rbp)
ptr = skb_gro_header_slow(skb, off + plen, off);
if (!ptr)
520d: 49 01 fa add %rdi,%r10
5210: 48 89 4d c8 mov %rcx,-0x38(%rbp)
5214: 44 8b 6b 4c mov 0x4c(%rbx),%r13d
unsigned int offset)
{
if (!pskb_may_pull(skb, hlen))
return NULL;
NAPI_GRO_CB(skb)->frag0 = NULL;
5218: 4c 89 55 d0 mov %r10,-0x30(%rbp)
521c: e8 00 00 00 00 callq 5221 <vxlan_gro_receive+0x241>
NAPI_GRO_CB(skb)->frag0_len = 0;
5221: 4c 8b 55 d0 mov -0x30(%rbp),%r10
5225: 4c 8b 4d c0 mov -0x40(%rbp),%r9
}
ptr = skb_gro_header_fast(skb, off);
if (skb_gro_header_hard(skb, off + plen)) {
ptr = skb_gro_header_slow(skb, off + plen, off);
if (!ptr)
5229: f7 d0 not %eax
522b: 41 01 c5 add %eax,%r13d
return NULL;
}
delta = remcsum_adjust(ptr + hdrlen, NAPI_GRO_CB(skb)->csum,
522e: 41 83 d5 00 adc $0x0,%r13d
{
__sum16 *psum = (__sum16 *)(ptr + offset);
__wsum delta;
/* Subtract out checksum up to start */
csum = csum_sub(csum, csum_partial(ptr, start, 0));
5232: 44 89 e8 mov %r13d,%eax
5235: 66 45 31 ed xor %r13w,%r13w
5239: 48 8b 4d c8 mov -0x38(%rbp),%rcx
}
static inline __wsum remcsum_adjust(void *ptr, __wsum csum,
int start, int offset)
{
__sum16 *psum = (__sum16 *)(ptr + offset);
523d: 41 0f b7 12 movzwl (%r10),%edx
5241: c1 e0 10 shl $0x10,%eax
5244: 45 01 cc add %r9d,%r12d
5247: 41 01 c5 add %eax,%r13d
524a: 41 81 d5 ff ff 00 00 adc $0xffff,%r13d
csum_ipv6_magic(const struct in6_addr *saddr, const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum sum);
static inline unsigned add32_with_carry(unsigned a, unsigned b)
{
asm("addl %2,%0\n\t"
5251: 41 f7 d5 not %r13d
5254: 4c 8b 45 b8 mov -0x48(%rbp),%r8
start, offset);
/* Adjust skb->csum since we changed the packet */
NAPI_GRO_CB(skb)->csum = csum_add(NAPI_GRO_CB(skb)->csum, delta);
grc->offset = off + hdrlen + offset;
5258: 41 c1 ed 10 shr $0x10,%r13d
525c: 66 45 89 2a mov %r13w,(%r10)
5260: 44 89 e8 mov %r13d,%eax
static inline __sum16 csum_fold(__wsum sum)
{
asm(" addl %1,%0\n"
" adcl $0xffff,%0"
: "=r" (sum)
: "r" ((__force u32)sum << 16),
5263: 4d 63 ec movslq %r12d,%r13
"0" ((__force u32)sum & 0xffff0000));
5266: f7 d2 not %edx
5268: 49 83 c5 02 add $0x2,%r13
vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),
vh->vx_vni, &grc,
!!(vs->flags &
VXLAN_F_REMCSUM_NOPARTIAL));
if (!vh)
526c: 80 8b 93 00 00 00 10 orb $0x10,0x93(%rbx)
static inline __sum16 csum_fold(__wsum sum)
{
asm(" addl %1,%0\n"
" adcl $0xffff,%0"
: "=r" (sum)
: "r" ((__force u32)sum << 16),
5273: 01 d0 add %edx,%eax
5275: 83 d0 00 adc $0x0,%eax
* the last step before putting a checksum into a packet.
* Make sure not to mix with 64bit checksums.
*/
static inline __sum16 csum_fold(__wsum sum)
{
asm(" addl %1,%0\n"
5278: 41 89 c6 mov %eax,%r14d
527b: 8b 43 4c mov 0x4c(%rbx),%eax
527e: 44 01 f0 add %r14d,%eax
csum_ipv6_magic(const struct in6_addr *saddr, const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum sum);
static inline unsigned add32_with_carry(unsigned a, unsigned b)
{
asm("addl %2,%0\n\t"
5281: 83 d0 00 adc $0x0,%eax
5284: 4d 85 ff test %r15,%r15
5287: 89 43 4c mov %eax,0x4c(%rbx)
528a: 0f 85 25 fe ff ff jne 50b5 <vxlan_gro_receive+0xd5>
5290: ba 01 00 00 00 mov $0x1,%edx
5295: 45 31 ff xor %r15d,%r15d
5298: e9 6a fe ff ff jmpq 5107 <vxlan_gro_receive+0x127>
offset = start + vxlan_rco_offset(vni_field);
vh = skb_gro_remcsum_process(skb, (void *)vh, off, hdrlen,
start, offset, grc, nopartial);
skb->remcsum_offload = 1;
529d: 8b b3 80 00 00 00 mov 0x80(%rbx),%esi
52a3: 89 f0 mov %esi,%eax
52a5: 2b 83 84 00 00 00 sub 0x84(%rbx),%eax
52ab: 41 39 c5 cmp %eax,%r13d
52ae: 0f 87 91 00 00 00 ja 5345 <vxlan_gro_receive+0x365>
vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),
vh->vx_vni, &grc,
!!(vs->flags &
VXLAN_F_REMCSUM_NOPARTIAL));
if (!vh)
52b4: 48 03 8b d8 00 00 00 add 0xd8(%rbx),%rcx
52bb: 48 c7 43 28 00 00 00 movq $0x0,0x28(%rbx)
52c2: 00
52c3: c7 43 30 00 00 00 00 movl $0x0,0x30(%rbx)
static struct sk_buff **vxlan_gro_receive(struct sock *sk,
struct sk_buff **head,
struct sk_buff *skb)
{
struct sk_buff *p, **pp = NULL;
52ca: 0f 85 55 fe ff ff jne 5125 <vxlan_gro_receive+0x145>
52d0: e9 78 fd ff ff jmpq 504d <vxlan_gro_receive+0x6d>
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
52d5: 41 39 d4 cmp %edx,%r12d
52d8: 48 89 75 c8 mov %rsi,-0x38(%rbp)
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
52dc: 48 89 4d d0 mov %rcx,-0x30(%rbp)
52e0: 0f 87 5f fd ff ff ja 5045 <vxlan_gro_receive+0x65>
return;
ptr = skb_gro_header_fast(skb, grc->offset);
if (skb_gro_header_hard(skb, grc->offset + sizeof(u16))) {
ptr = skb_gro_header_slow(skb, plen, grc->offset);
if (!ptr)
52e6: 44 89 e6 mov %r12d,%esi
52e9: 48 89 df mov %rbx,%rdi
unsigned int offset)
{
if (!pskb_may_pull(skb, hlen))
return NULL;
NAPI_GRO_CB(skb)->frag0 = NULL;
52ec: 29 c6 sub %eax,%esi
52ee: e8 00 00 00 00 callq 52f3 <vxlan_gro_receive+0x313>
NAPI_GRO_CB(skb)->frag0_len = 0;
52f3: 48 85 c0 test %rax,%rax
52f6: 48 8b 4d d0 mov -0x30(%rbp),%rcx
return;
ptr = skb_gro_header_fast(skb, grc->offset);
if (skb_gro_header_hard(skb, grc->offset + sizeof(u16))) {
ptr = skb_gro_header_slow(skb, plen, grc->offset);
if (!ptr)
52fa: 4c 8b 45 c8 mov -0x38(%rbp),%r8
52fe: 0f 84 41 fd ff ff je 5045 <vxlan_gro_receive+0x65>
5304: e9 21 fd ff ff jmpq 502a <vxlan_gro_receive+0x4a>
return 1;
if (unlikely(len > skb->len))
5309: 4d 85 ff test %r15,%r15
530c: 0f 85 97 fd ff ff jne 50a9 <vxlan_gro_receive+0xc9>
5312: e9 2e fd ff ff jmpq 5045 <vxlan_gro_receive+0x65>
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
5317: 47 8d 6c 35 08 lea 0x8(%r13,%r14,1),%r13d
531c: 83 ce 10 or $0x10,%esi
531f: 4d 85 ff test %r15,%r15
5322: 40 88 b3 93 00 00 00 mov %sil,0x93(%rbx)
}
static inline void *skb_gro_header_slow(struct sk_buff *skb, unsigned int hlen,
unsigned int offset)
{
if (!pskb_may_pull(skb, hlen))
5329: 66 44 89 6b 3e mov %r13w,0x3e(%rbx)
532e: 0f 85 75 fd ff ff jne 50a9 <vxlan_gro_receive+0xc9>
5334: e9 0c fd ff ff jmpq 5045 <vxlan_gro_receive+0x65>
vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),
vh->vx_vni, &grc,
!!(vs->flags &
VXLAN_F_REMCSUM_NOPARTIAL));
if (!vh)
5339: 49 89 cf mov %rcx,%r15
533c: 4c 03 7b 28 add 0x28(%rbx),%r15
5340: e9 b7 fe ff ff jmpq 51fc <vxlan_gro_receive+0x21c>
5345: 44 39 ee cmp %r13d,%esi
size_t plen = hdrlen + max_t(size_t, offset + sizeof(u16), start);
BUG_ON(!NAPI_GRO_CB(skb)->csum_valid);
if (!nopartial) {
NAPI_GRO_CB(skb)->gro_remcsum_start = off + hdrlen + start;
5348: 48 89 4d c8 mov %rcx,-0x38(%rbp)
offset = start + vxlan_rco_offset(vni_field);
vh = skb_gro_remcsum_process(skb, (void *)vh, off, hdrlen,
start, offset, grc, nopartial);
skb->remcsum_offload = 1;
534c: 0f 82 fb fc ff ff jb 504d <vxlan_gro_receive+0x6d>
5352: 44 89 ee mov %r13d,%esi
5355: 48 89 df mov %rbx,%rdi
5358: 89 55 d0 mov %edx,-0x30(%rbp)
535b: 29 c6 sub %eax,%esi
535d: e8 00 00 00 00 callq 5362 <vxlan_gro_receive+0x382>
vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),
vh->vx_vni, &grc,
!!(vs->flags &
VXLAN_F_REMCSUM_NOPARTIAL));
if (!vh)
5362: 48 85 c0 test %rax,%rax
5365: 8b 55 d0 mov -0x30(%rbp),%edx
5368: 48 8b 4d c8 mov -0x38(%rbp),%rcx
}
static inline void *skb_gro_header_fast(struct sk_buff *skb,
unsigned int offset)
{
return NAPI_GRO_CB(skb)->frag0 + offset;
536c: 0f 84 db fc ff ff je 504d <vxlan_gro_receive+0x6d>
5372: e9 3d ff ff ff jmpq 52b4 <vxlan_gro_receive+0x2d4>
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
return 1;
if (unlikely(len > skb->len))
5377: 39 f0 cmp %esi,%eax
5379: 77 35 ja 53b0 <vxlan_gro_receive+0x3d0>
537b: 29 d0 sub %edx,%eax
537d: 48 89 df mov %rbx,%rdi
5380: 4c 89 45 b8 mov %r8,-0x48(%rbp)
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
5384: 89 c6 mov %eax,%esi
5386: 4c 89 55 c0 mov %r10,-0x40(%rbp)
538a: 4c 89 4d c8 mov %r9,-0x38(%rbp)
538e: 48 89 4d d0 mov %rcx,-0x30(%rbp)
}
static inline void *skb_gro_header_slow(struct sk_buff *skb, unsigned int hlen,
unsigned int offset)
{
if (!pskb_may_pull(skb, hlen))
5392: e8 00 00 00 00 callq 5397 <vxlan_gro_receive+0x3b7>
5397: 48 85 c0 test %rax,%rax
539a: 48 8b 4d d0 mov -0x30(%rbp),%rcx
539e: 4c 8b 4d c8 mov -0x38(%rbp),%r9
53a2: 4c 8b 55 c0 mov -0x40(%rbp),%r10
53a6: 4c 8b 45 b8 mov -0x48(%rbp),%r8
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
return 1;
if (unlikely(len > skb->len))
53aa: 0f 85 2d fe ff ff jne 51dd <vxlan_gro_receive+0x1fd>
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
53b0: 80 8b 93 00 00 00 10 orb $0x10,0x93(%rbx)
53b7: ba 01 00 00 00 mov $0x1,%edx
53bc: 45 31 ff xor %r15d,%r15d
53bf: e9 89 fc ff ff jmpq 504d <vxlan_gro_receive+0x6d>
53c4: 66 90 xchg %ax,%ax
53c6: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
53cd: 00 00 00
00000000000053d0 <vxlan_rcv>:
53d0: e8 00 00 00 00 callq 53d5 <vxlan_rcv+0x5>
53d5: 55 push %rbp
53d6: 48 89 e5 mov %rsp,%rbp
53d9: 41 57 push %r15
53db: 41 56 push %r14
53dd: 41 55 push %r13
53df: 41 54 push %r12
offset = start + vxlan_rco_offset(vni_field);
vh = skb_gro_remcsum_process(skb, (void *)vh, off, hdrlen,
start, offset, grc, nopartial);
skb->remcsum_offload = 1;
53e1: 49 89 f4 mov %rsi,%r12
53e4: 53 push %rbx
53e5: 48 89 fb mov %rdi,%rbx
53e8: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
static struct sk_buff **vxlan_gro_receive(struct sock *sk,
struct sk_buff **head,
struct sk_buff *skb)
{
struct sk_buff *p, **pp = NULL;
53ec: 48 83 ec 60 sub $0x60,%rsp
53f0: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
53f7: 00 00
53f9: 48 89 44 24 58 mov %rax,0x58(%rsp)
53fe: 31 c0 xor %eax,%eax
return err <= 1;
}
/* Callback from net/ipv4/udp.c to receive packets */
static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
{
5400: 8b 86 80 00 00 00 mov 0x80(%rsi),%eax
5406: 89 c2 mov %eax,%edx
5408: 2b 96 84 00 00 00 sub 0x84(%rsi),%edx
540e: 83 fa 0f cmp $0xf,%edx
5411: 0f 86 b9 02 00 00 jbe 56d0 <vxlan_rcv+0x300>
5417: 41 0f b7 84 24 c2 00 movzwl 0xc2(%r12),%eax
541e: 00 00
5420: 49 03 84 24 d0 00 00 add 0xd0(%r12),%rax
5427: 00
5428: 4c 8b 68 08 mov 0x8(%rax),%r13
542c: 44 8b 70 0c mov 0xc(%rax),%r14d
5430: 41 f6 c5 08 test $0x8,%r13b
5434: 44 89 e9 mov %r13d,%ecx
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
5437: 0f 84 80 00 00 00 je 54bd <vxlan_rcv+0xed>
543d: 4c 8b bb 48 02 00 00 mov 0x248(%rbx),%r15
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
5444: 4c 89 ee mov %r13,%rsi
return skb->transport_header != (typeof(skb->transport_header))~0U;
}
static inline unsigned char *skb_transport_header(const struct sk_buff *skb)
{
return skb->head + skb->transport_header;
5447: 48 83 e6 f7 and $0xfffffffffffffff7,%rsi
544b: 49 89 f5 mov %rsi,%r13
544e: 4d 85 ff test %r15,%r15
5451: 74 3d je 5490 <vxlan_rcv+0xc0>
5453: 41 8b bf 1c 20 00 00 mov 0x201c(%r15),%edi
/* Need UDP and VXLAN header to be present */
if (!pskb_may_pull(skb, VXLAN_HLEN))
goto drop;
unparsed = *vxlan_hdr(skb);
545a: f7 c7 00 20 00 00 test $0x2000,%edi
/* VNI flag always required to be set */
if (!(unparsed.vx_flags & VXLAN_HF_VNI)) {
5460: 0f 85 83 00 00 00 jne 54e9 <vxlan_rcv+0x119>
5466: 44 89 f0 mov %r14d,%eax
5469: c1 e0 08 shl $0x8,%eax
546c: 69 d0 47 86 c8 61 imul $0x61c88647,%eax,%edx
5472: c1 ea 16 shr $0x16,%edx
ntohl(vxlan_hdr(skb)->vx_flags),
ntohl(vxlan_hdr(skb)->vx_vni));
/* Return non vxlan pkt */
goto drop;
}
unparsed.vx_flags &= ~VXLAN_HF_VNI;
5475: 49 8d 54 d7 10 lea 0x10(%r15,%rdx,8),%rdx
547a: 48 8b 5a 08 mov 0x8(%rdx),%rbx
unparsed.vx_vni &= ~VXLAN_VNI_MASK;
vs = rcu_dereference_sk_user_data(sk);
if (!vs)
547e: 48 85 db test %rbx,%rbx
5481: 74 0d je 5490 <vxlan_rcv+0xc0>
static struct vxlan_dev *vxlan_vs_find_vni(struct vxlan_sock *vs, __be32 vni)
{
struct vxlan_dev *vxlan;
/* For flow based devices, map all packets to VNI 0 */
if (vs->flags & VXLAN_F_COLLECT_METADATA)
5483: 3b 43 60 cmp 0x60(%rbx),%eax
5486: 74 67 je 54ef <vxlan_rcv+0x11f>
5488: 48 8b 1b mov (%rbx),%rbx
548b: 48 85 db test %rbx,%rbx
548e: 75 f3 jne 5483 <vxlan_rcv+0xb3>
5490: 4c 89 e7 mov %r12,%rdi
5493: e8 00 00 00 00 callq 5498 <vxlan_rcv+0xc8>
static inline __be32 vxlan_vni(__be32 vni_field)
{
#if defined(__BIG_ENDIAN)
return (__force __be32)((__force u32)vni_field >> 8);
#else
return (__force __be32)((__force u32)(vni_field & VXLAN_VNI_MASK) << 8);
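The `#if defined(__BIG_ENDIAN)` branches above are pure endian bookkeeping: byte-wise, the 24-bit VNI sits in the first three bytes of the on-wire `vx_vni` word. A hypothetical helper (`vni_from_wire` is an illustrative name) that returns it as a host integer, whereas the kernel's `vxlan_vni()` keeps it as a shifted `__be32`:

```c
#include <stdint.h>

/* RFC 7348: vx_vni carries the VNI in its top three bytes (network
 * order); the low byte is reserved. Return the VNI in host order. */
static uint32_t vni_from_wire(const uint8_t vni_field[4])
{
    return ((uint32_t)vni_field[0] << 16) |
           ((uint32_t)vni_field[1] << 8)  |
            (uint32_t)vni_field[2];
}
```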
5498: 31 c0 xor %eax,%eax
549a: 48 8b 4c 24 58 mov 0x58(%rsp),%rcx
549f: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
54a6: 00 00
vni = 0;
hlist_for_each_entry_rcu(vxlan, vni_head(vs, vni), hlist) {
54a8: 0f 85 e7 08 00 00 jne 5d95 <vxlan_rcv+0x9c5>
54ae: 48 8d 65 d8 lea -0x28(%rbp),%rsp
54b2: 5b pop %rbx
if (vxlan->default_dst.remote_vni == vni)
54b3: 41 5c pop %r12
54b5: 41 5d pop %r13
54b7: 41 5e pop %r14
54b9: 41 5f pop %r15
/* For flow based devices, map all packets to VNI 0 */
if (vs->flags & VXLAN_F_COLLECT_METADATA)
vni = 0;
hlist_for_each_entry_rcu(vxlan, vni_head(vs, vni), hlist) {
54bb: 5d pop %rbp
54bc: c3 retq
54bd: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
gro_cells_receive(&vxlan->gro_cells, skb);
return 0;
drop:
/* Consume bad packet */
kfree_skb(skb);
54c2: eb cc jmp 5490 <vxlan_rcv+0xc0>
54c4: 8b 48 08 mov 0x8(%rax),%ecx
54c7: 49 8b 74 24 20 mov 0x20(%r12),%rsi
return 0;
}
54cc: 45 89 f0 mov %r14d,%r8d
54cf: 41 0f c8 bswap %r8d
54d2: 48 c7 c2 00 00 00 00 mov $0x0,%rdx
54d9: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
54e0: 0f c9 bswap %ecx
54e2: e8 00 00 00 00 callq 54e7 <vxlan_rcv+0x117>
54e7: eb a7 jmp 5490 <vxlan_rcv+0xc0>
54e9: 31 d2 xor %edx,%edx
54eb: 31 c0 xor %eax,%eax
54ed: eb 86 jmp 5475 <vxlan_rcv+0xa5>
54ef: 81 e7 00 40 00 00 and $0x4000,%edi
goto drop;
unparsed = *vxlan_hdr(skb);
/* VNI flag always required to be set */
if (!(unparsed.vx_flags & VXLAN_HF_VNI)) {
netdev_dbg(skb->dev, "invalid vxlan flags=%#x vni=%#x\n",
54f5: 0f 85 74 01 00 00 jne 566f <vxlan_rcv+0x29f>
54fb: ba 65 58 00 00 mov $0x5865,%edx
5500: 31 c9 xor %ecx,%ecx
5502: c6 44 24 27 00 movb $0x0,0x27(%rsp)
5507: 48 8b 43 30 mov 0x30(%rbx),%rax
550b: 48 8b 7b 38 mov 0x38(%rbx),%rdi
550f: 45 31 c0 xor %r8d,%r8d
5512: be 10 00 00 00 mov $0x10,%esi
5517: 48 39 b8 80 04 00 00 cmp %rdi,0x480(%rax)
{
struct vxlan_dev *vxlan;
/* For flow based devices, map all packets to VNI 0 */
if (vs->flags & VXLAN_F_COLLECT_METADATA)
vni = 0;
551e: 4c 89 e7 mov %r12,%rdi
goto drop;
/* For backwards compatibility, only allow reserved fields to be
* used by VXLAN extensions if explicitly requested.
*/
if (vs->flags & VXLAN_F_GPE) {
5521: 41 0f 95 c0 setne %r8b
5525: e8 00 00 00 00 callq 552a <vxlan_rcv+0x15a>
552a: 85 c0 test %eax,%eax
552c: 0f 85 5e ff ff ff jne 5490 <vxlan_rcv+0xc0>
struct vxlan_sock *vs;
struct vxlanhdr unparsed;
struct vxlan_metadata _md;
struct vxlan_metadata *md = &_md;
__be16 protocol = htons(ETH_P_TEB);
bool raw_proto = false;
5532: 41 8b 97 1c 20 00 00 mov 0x201c(%r15),%edx
5539: f6 c6 20 test $0x20,%dh
if (!vxlan_parse_gpe_hdr(&unparsed, &protocol, skb, vs->flags))
goto drop;
raw_proto = true;
}
if (__iptunnel_pull_header(skb, VXLAN_HLEN, protocol, raw_proto,
553c: 0f 85 b4 01 00 00 jne 56f6 <vxlan_rcv+0x326>
5542: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
5547: 4c 8d 44 24 2c lea 0x2c(%rsp),%r8
554c: c7 44 24 2c 00 00 00 movl $0x0,0x2c(%rsp)
5553: 00
5554: 44 89 f0 mov %r14d,%eax
5557: 44 89 e9 mov %r13d,%ecx
555a: 25 00 00 00 ff and $0xff000000,%eax
555f: f6 c6 04 test $0x4,%dh
5562: 89 c7 mov %eax,%edi
5564: 0f 85 e8 01 00 00 jne 5752 <vxlan_rcv+0x382>
/* salt for hash table */
static u32 vxlan_salt __read_mostly;
static inline bool vxlan_collect_metadata(struct vxlan_sock *vs)
{
return vs->flags & VXLAN_F_COLLECT_METADATA ||
556a: f6 c6 08 test $0x8,%dh
556d: 0f 85 25 03 00 00 jne 5898 <vxlan_rcv+0x4c8>
5573: 85 c9 test %ecx,%ecx
5575: 0f 85 15 ff ff ff jne 5490 <vxlan_rcv+0xc0>
struct pcpu_sw_netstats *stats;
struct vxlan_dev *vxlan;
struct vxlan_sock *vs;
struct vxlanhdr unparsed;
struct vxlan_metadata _md;
struct vxlan_metadata *md = &_md;
557b: 85 ff test %edi,%edi
md = ip_tunnel_info_opts(&tun_dst->u.tun_info);
skb_dst_set(skb, (struct dst_entry *)tun_dst);
} else {
memset(md, 0, sizeof(*md));
557d: 0f 85 0d ff ff ff jne 5490 <vxlan_rcv+0xc0>
5583: 80 7c 24 27 00 cmpb $0x0,0x27(%rsp)
ntohl(vxlan_hdr(skb)->vx_vni));
/* Return non vxlan pkt */
goto drop;
}
unparsed.vx_flags &= ~VXLAN_HF_VNI;
unparsed.vx_vni &= ~VXLAN_VNI_MASK;
5588: 0f 84 85 03 00 00 je 5913 <vxlan_rcv+0x543>
558e: 49 8b 8c 24 d0 00 00 mov 0xd0(%r12),%rcx
5595: 00
skb_dst_set(skb, (struct dst_entry *)tun_dst);
} else {
memset(md, 0, sizeof(*md));
}
if (vs->flags & VXLAN_F_REMCSUM_RX)
5596: 49 8b 84 24 d8 00 00 mov 0xd8(%r12),%rax
559d: 00
if (!vxlan_remcsum(&unparsed, skb, vs->flags))
goto drop;
if (vs->flags & VXLAN_F_GBP)
559e: 45 0f b7 ac 24 c4 00 movzwl 0xc4(%r12),%r13d
55a5: 00 00
vxlan_parse_gbp_hdr(&unparsed, skb, vs->flags, md);
/* Note that GBP and GPE can never be active together. This is
* ensured in vxlan_dev_configure.
*/
if (unparsed.vx_flags || unparsed.vx_vni) {
55a7: 48 29 c8 sub %rcx,%rax
55aa: 66 41 89 84 24 c6 00 mov %ax,0xc6(%r12)
55b1: 00 00
* adding extensions to VXLAN.
*/
goto drop;
}
if (!raw_proto) {
55b3: 48 8b 73 30 mov 0x30(%rbx),%rsi
55b7: 89 c2 mov %eax,%edx
55b9: 41 80 a4 24 90 00 00 andb $0xf8,0x90(%r12)
55c0: 00 f8
return skb->mac_header != (typeof(skb->mac_header))~0U;
}
static inline void skb_reset_mac_header(struct sk_buff *skb)
{
skb->mac_header = skb->data - skb->head;
55c2: 49 89 74 24 20 mov %rsi,0x20(%r12)
55c7: 66 41 89 94 24 c4 00 mov %dx,0xc4(%r12)
55ce: 00 00
55d0: 49 01 cd add %rcx,%r13
55d3: 49 8b 57 10 mov 0x10(%r15),%rdx
55d7: 48 8b 52 20 mov 0x20(%rdx),%rdx
55db: 0f b7 72 10 movzwl 0x10(%rdx),%esi
55df: 66 83 fe 02 cmp $0x2,%si
if (!vxlan_set_mac(vxlan, vs, skb))
goto drop;
} else {
skb_reset_mac_header(skb);
skb->dev = vxlan->dev;
55e3: 0f 84 0b 04 00 00 je 59f4 <vxlan_rcv+0x624>
skb->pkt_type = PACKET_HOST;
55e9: 41 0f b7 bc 24 c0 00 movzwl 0xc0(%r12),%edi
55f0: 00 00
if (!raw_proto) {
if (!vxlan_set_mac(vxlan, vs, skb))
goto drop;
} else {
skb_reset_mac_header(skb);
skb->dev = vxlan->dev;
55f2: 66 83 ff 08 cmp $0x8,%di
55f6: 0f 84 84 05 00 00 je 5b80 <vxlan_rcv+0x7b0>
return skb->head + skb->network_header;
}
static inline void skb_reset_network_header(struct sk_buff *skb)
{
skb->network_header = skb->data - skb->head;
55fc: 66 81 ff 86 dd cmp $0xdd86,%di
skb->transport_header += offset;
}
static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
return skb->head + skb->network_header;
5601: 0f 84 78 04 00 00 je 5a7f <vxlan_rcv+0x6af>
return vni_field;
}
static inline unsigned short vxlan_get_sk_family(struct vxlan_sock *vs)
{
return vs->sock->sk->sk_family;
5607: 48 8b 43 30 mov 0x30(%rbx),%rax
560b: 48 8b 80 88 04 00 00 mov 0x488(%rax),%rax
static bool vxlan_ecn_decapsulate(struct vxlan_sock *vs, void *oiph,
struct sk_buff *skb)
{
int err = 0;
if (vxlan_get_sk_family(vs) == AF_INET)
5612: 65 48 03 05 00 00 00 add %gs:0x0(%rip),%rax # 561a <vxlan_rcv+0x24a>
5619: 00
static inline int IP6_ECN_decapsulate(const struct ipv6hdr *oipv6h,
struct sk_buff *skb)
{
__u8 inner;
if (skb->protocol == htons(ETH_P_IP))
561a: 48 83 00 01 addq $0x1,(%rax)
561e: 41 8b 94 24 80 00 00 mov 0x80(%r12),%edx
5625: 00
5626: 48 01 50 08 add %rdx,0x8(%rax)
562a: 48 8b 83 f0 00 00 00 mov 0xf0(%rbx),%rax
inner = ip_hdr(skb)->tos;
else if (skb->protocol == htons(ETH_P_IPV6))
5631: 48 85 c0 test %rax,%rax
5634: 74 2c je 5662 <vxlan_rcv+0x292>
5636: 41 f6 84 24 8e 00 00 testb $0x1,0x8e(%r12)
563d: 00 01
++vxlan->dev->stats.rx_frame_errors;
++vxlan->dev->stats.rx_errors;
goto drop;
}
stats = this_cpu_ptr(vxlan->dev->tstats);
563f: 0f 84 72 05 00 00 je 5bb7 <vxlan_rcv+0x7e7>
5645: 41 8b 94 24 cc 00 00 mov 0xcc(%r12),%edx
564c: 00
u64_stats_update_begin(&stats->syncp);
stats->rx_packets++;
564d: 49 03 94 24 d0 00 00 add 0xd0(%r12),%rdx
5654: 00
stats->rx_bytes += skb->len;
5655: 8b 52 20 mov 0x20(%rdx),%edx
5658: 66 83 fa 01 cmp $0x1,%dx
static inline int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
{
struct gro_cell *cell;
struct net_device *dev = skb->dev;
if (!gcells->cells || skb_cloned(skb) || !(dev->features & NETIF_F_GRO))
565c: 0f 84 55 05 00 00 je 5bb7 <vxlan_rcv+0x7e7>
5662: 4c 89 e7 mov %r12,%rdi
5665: e8 00 00 00 00 callq 566a <vxlan_rcv+0x29a>
* one of multiple shared copies of the buffer. Cloned buffers are
* shared data so must not be written to under normal circumstances.
*/
static inline int skb_cloned(const struct sk_buff *skb)
{
return skb->cloned &&
566a: e9 29 fe ff ff jmpq 5498 <vxlan_rcv+0xc8>
566f: 40 f6 c6 04 test $0x4,%sil
5673: 0f 84 17 fe ff ff je 5490 <vxlan_rcv+0xc0>
};
#ifdef NET_SKBUFF_DATA_USES_OFFSET
static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)
{
return skb->head + skb->end;
5679: 40 f6 c6 31 test $0x31,%sil
567d: 0f 85 0d fe ff ff jne 5490 <vxlan_rcv+0xc0>
5683: 48 89 f0 mov %rsi,%rax
5686: 48 c1 e8 18 shr $0x18,%rax
* one of multiple shared copies of the buffer. Cloned buffers are
* shared data so must not be written to under normal circumstances.
*/
static inline int skb_cloned(const struct sk_buff *skb)
{
return skb->cloned &&
568a: 3c 02 cmp $0x2,%al
568c: 0f 84 77 02 00 00 je 5909 <vxlan_rcv+0x539>
return netif_rx(skb);
5692: 3c 03 cmp $0x3,%al
5694: 0f 84 65 02 00 00 je 58ff <vxlan_rcv+0x52f>
569a: 3c 01 cmp $0x1,%al
569c: ba 08 00 00 00 mov $0x8,%edx
struct sk_buff *skb, u32 vxflags)
{
struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)unparsed;
/* Need to have Next Protocol set for interfaces in GPE mode. */
if (!gpe->np_applied)
56a1: 0f 85 e9 fd ff ff jne 5490 <vxlan_rcv+0xc0>
56a7: 41 89 cd mov %ecx,%r13d
return false;
/* "When the O bit is set to 1, the packet is an OAM packet and OAM
* processing MUST occur." However, we don't implement OAM
* processing, thus drop the packet.
*/
if (gpe->oam_flag)
56aa: 48 b8 00 00 00 00 ff movabs $0xffffffff00000000,%rax
56b1: ff ff ff
return false;
switch (gpe->next_protocol) {
56b4: b9 01 00 00 00 mov $0x1,%ecx
56b9: 41 81 e5 c2 ff ff 00 and $0xffffc2,%r13d
56c0: 48 21 c6 and %rax,%rsi
56c3: c6 44 24 27 01 movb $0x1,0x27(%rsp)
56c8: 49 09 f5 or %rsi,%r13
56cb: e9 37 fe ff ff jmpq 5507 <vxlan_rcv+0x137>
56d0: 83 f8 0f cmp $0xf,%eax
56d3: 0f 86 b7 fd ff ff jbe 5490 <vxlan_rcv+0xc0>
break;
default:
return false;
}
unparsed->vx_flags &= ~VXLAN_GPE_USED_BITS;
56d9: be 10 00 00 00 mov $0x10,%esi
56de: 4c 89 e7 mov %r12,%rdi
56e1: 29 d6 sub %edx,%esi
56e3: e8 00 00 00 00 callq 56e8 <vxlan_rcv+0x318>
56e8: 48 85 c0 test %rax,%rax
56eb: 0f 84 9f fd ff ff je 5490 <vxlan_rcv+0xc0>
56f1: e9 21 fd ff ff jmpq 5417 <vxlan_rcv+0x47>
* used by VXLAN extensions if explicitly requested.
*/
if (vs->flags & VXLAN_F_GPE) {
if (!vxlan_parse_gpe_hdr(&unparsed, &protocol, skb, vs->flags))
goto drop;
raw_proto = true;
56f6: 41 0f b7 84 24 c2 00 movzwl 0xc2(%r12),%eax
56fd: 00 00
break;
default:
return false;
}
unparsed->vx_flags &= ~VXLAN_GPE_USED_BITS;
56ff: 49 8b 94 24 d0 00 00 mov 0xd0(%r12),%rdx
5706: 00
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
return 1;
if (unlikely(len > skb->len))
5707: 41 b8 04 00 00 00 mov $0x4,%r8d
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
570d: 4c 89 e7 mov %r12,%rdi
5710: 8b 4c 02 0c mov 0xc(%rdx,%rax,1),%ecx
5714: 49 8b 47 10 mov 0x10(%r15),%rax
__be16 protocol = htons(ETH_P_TEB);
bool raw_proto = false;
void *oiph;
/* Need UDP and VXLAN header to be present */
if (!pskb_may_pull(skb, VXLAN_HLEN))
5718: ba 00 04 00 00 mov $0x400,%edx
571d: 48 8b 40 20 mov 0x20(%rax),%rax
5721: c1 e1 08 shl $0x8,%ecx
5724: 48 c1 e1 20 shl $0x20,%rcx
if (__iptunnel_pull_header(skb, VXLAN_HLEN, protocol, raw_proto,
!net_eq(vxlan->net, dev_net(vxlan->dev))))
goto drop;
if (vxlan_collect_metadata(vs)) {
__be32 vni = vxlan_vni(vxlan_hdr(skb)->vx_vni);
5728: 0f b7 70 10 movzwl 0x10(%rax),%esi
572c: e8 00 00 00 00 callq 5731 <vxlan_rcv+0x361>
5731: 48 85 c0 test %rax,%rax
5734: 0f 84 56 fd ff ff je 5490 <vxlan_rcv+0xc0>
struct metadata_dst *tun_dst;
tun_dst = udp_tun_rx_dst(skb, vxlan_get_sk_family(vs), TUNNEL_KEY,
573a: 49 89 44 24 58 mov %rax,0x58(%r12)
573f: 4c 8d 80 f0 00 00 00 lea 0xf0(%rax),%r8
5746: 41 8b 97 1c 20 00 00 mov 0x201c(%r15),%edx
574d: e9 02 fe ff ff jmpq 5554 <vxlan_rcv+0x184>
5752: 41 f7 c5 00 20 00 00 test $0x2000,%r13d
5759: 45 89 e9 mov %r13d,%r9d
575c: 0f 84 17 01 00 00 je 5879 <vxlan_rcv+0x4a9>
vxlan_vni_to_tun_id(vni), sizeof(*md));
if (!tun_dst)
5762: 41 f6 84 24 93 00 00 testb $0x10,0x93(%r12)
5769: 00 10
* Sets skb dst, assuming a reference was taken on dst and should
* be released by skb_dst_drop()
*/
static inline void skb_dst_set(struct sk_buff *skb, struct dst_entry *dst)
{
skb->_skb_refdst = (unsigned long)dst;
576b: 0f 85 08 01 00 00 jne 5879 <vxlan_rcv+0x4a9>
}
}
static inline void *ip_tunnel_info_opts(struct ip_tunnel_info *info)
{
return info + 1;
5771: 41 81 e6 00 00 00 7f and $0x7f000000,%r14d
5778: 48 98 cltq
577a: 41 8b bc 24 80 00 00 mov 0x80(%r12),%edi
5781: 00
static bool vxlan_remcsum(struct vxlanhdr *unparsed,
struct sk_buff *skb, u32 vxflags)
{
size_t start, offset;
if (!(unparsed->vx_flags & VXLAN_HF_RCO) || skb->remcsum_offload)
5782: 41 0f ce bswap %r14d
5785: 48 c1 f8 3f sar $0x3f,%rax
5789: 43 8d 34 36 lea (%r14,%r14,1),%esi
578d: 48 83 e0 f6 and $0xfffffffffffffff6,%rax
5791: 48 8d 44 06 10 lea 0x10(%rsi,%rax,1),%rax
5796: 49 89 f6 mov %rsi,%r14
5799: 89 fe mov %edi,%esi
579b: 41 2b b4 24 84 00 00 sub 0x84(%r12),%esi
57a2: 00
#endif
}
static inline size_t vxlan_rco_start(__be32 vni_field)
{
return be32_to_cpu(vni_field & VXLAN_RCO_MASK) << VXLAN_RCO_SHIFT;
57a3: 48 89 44 24 18 mov %rax,0x18(%rsp)
}
static inline size_t vxlan_rco_offset(__be32 vni_field)
{
return (vni_field & VXLAN_RCO_UDP) ?
offsetof(struct udphdr, check) :
57a8: 83 c0 02 add $0x2,%eax
57ab: 39 f0 cmp %esi,%eax
57ad: 0f 87 65 05 00 00 ja 5d18 <vxlan_rcv+0x948>
#endif
}
static inline size_t vxlan_rco_start(__be32 vni_field)
{
return be32_to_cpu(vni_field & VXLAN_RCO_MASK) << VXLAN_RCO_SHIFT;
57b3: 41 0f b7 84 24 c2 00 movzwl 0xc2(%r12),%eax
57ba: 00 00
57bc: 49 8b b4 24 d0 00 00 mov 0xd0(%r12),%rsi
57c3: 00
goto out;
start = vxlan_rco_start(unparsed->vx_vni);
offset = start + vxlan_rco_offset(unparsed->vx_vni);
57c4: 80 e6 10 and $0x10,%dh
57c7: 4c 8d 54 06 10 lea 0x10(%rsi,%rax,1),%r10
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
57cc: 0f 84 09 05 00 00 je 5cdb <vxlan_rcv+0x90b>
57d2: 41 0f b6 84 24 91 00 movzbl 0x91(%r12),%eax
57d9: 00 00
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
57db: 83 e0 06 and $0x6,%eax
57de: 3c 04 cmp $0x4,%al
57e0: 0f 85 58 06 00 00 jne 5e3e <vxlan_rcv+0xa6e>
if (!pskb_may_pull(skb, offset + sizeof(u16)))
return false;
skb_remcsum_process(skb, (void *)(vxlan_hdr(skb) + 1), start, offset,
57e6: 45 8b 9c 24 98 00 00 mov 0x98(%r12),%r11d
57ed: 00
57ee: 48 63 44 24 18 movslq 0x18(%rsp),%rax
57f3: 31 d2 xor %edx,%edx
static inline void skb_remcsum_process(struct sk_buff *skb, void *ptr,
int start, int offset, bool nopartial)
{
__wsum delta;
if (!nopartial) {
57f5: 44 89 f6 mov %r14d,%esi
57f8: 4c 89 d7 mov %r10,%rdi
57fb: 44 89 5c 24 08 mov %r11d,0x8(%rsp)
5800: 44 89 4c 24 20 mov %r9d,0x20(%rsp)
skb_remcsum_adjust_partial(skb, ptr, start, offset);
return;
}
if (unlikely(skb->ip_summed != CHECKSUM_COMPLETE)) {
5805: 4c 89 44 24 10 mov %r8,0x10(%rsp)
580a: 49 8d 0c 02 lea (%r10,%rax,1),%rcx
580e: 48 89 4c 24 18 mov %rcx,0x18(%rsp)
5813: e8 00 00 00 00 callq 5818 <vxlan_rcv+0x448>
5818: 48 8b 4c 24 18 mov 0x18(%rsp),%rcx
581d: f7 d0 not %eax
581f: 44 8b 5c 24 08 mov 0x8(%rsp),%r11d
__wsum delta;
/* Subtract out checksum up to start */
csum = csum_sub(csum, csum_partial(ptr, start, 0));
5824: 41 01 c3 add %eax,%r11d
5827: 41 83 d3 00 adc $0x0,%r11d
582b: 44 89 da mov %r11d,%edx
582e: 66 45 31 db xor %r11w,%r11w
5832: 44 8b 4c 24 20 mov 0x20(%rsp),%r9d
5837: 0f b7 31 movzwl (%rcx),%esi
}
static inline __wsum remcsum_adjust(void *ptr, __wsum csum,
int start, int offset)
{
__sum16 *psum = (__sum16 *)(ptr + offset);
583a: c1 e2 10 shl $0x10,%edx
583d: 44 89 d8 mov %r11d,%eax
5840: 01 d0 add %edx,%eax
5842: 15 ff ff 00 00 adc $0xffff,%eax
__wsum delta;
/* Subtract out checksum up to start */
csum = csum_sub(csum, csum_partial(ptr, start, 0));
5847: f7 d0 not %eax
5849: 4c 8b 44 24 10 mov 0x10(%rsp),%r8
584e: c1 e8 10 shr $0x10,%eax
5851: 66 89 01 mov %ax,(%rcx)
5854: 89 c2 mov %eax,%edx
5856: f7 d6 not %esi
5858: 41 8b 84 24 98 00 00 mov 0x98(%r12),%eax
585f: 00
{
asm(" addl %1,%0\n"
" adcl $0xffff,%0"
: "=r" (sum)
: "r" ((__force u32)sum << 16),
"0" ((__force u32)sum & 0xffff0000));
5860: 01 f2 add %esi,%edx
5862: 83 d2 00 adc $0x0,%edx
5865: 01 d0 add %edx,%eax
csum_ipv6_magic(const struct in6_addr *saddr, const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum sum);
static inline unsigned add32_with_carry(unsigned a, unsigned b)
{
asm("addl %2,%0\n\t"
5867: 83 d0 00 adc $0x0,%eax
static inline __sum16 csum_fold(__wsum sum)
{
asm(" addl %1,%0\n"
" adcl $0xffff,%0"
: "=r" (sum)
: "r" ((__force u32)sum << 16),
586a: 41 89 84 24 98 00 00 mov %eax,0x98(%r12)
5871: 00
* the last step before putting a checksum into a packet.
* Make sure not to mix with 64bit checksums.
*/
static inline __sum16 csum_fold(__wsum sum)
{
asm(" addl %1,%0\n"
5872: 41 8b 97 1c 20 00 00 mov 0x201c(%r15),%edx
csum_ipv6_magic(const struct in6_addr *saddr, const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum sum);
static inline unsigned add32_with_carry(unsigned a, unsigned b)
{
asm("addl %2,%0\n\t"
5879: 44 89 c9 mov %r9d,%ecx
587c: 48 b8 00 00 00 00 ff movabs $0xffffffff00000000,%rax
5883: ff ff ff
5886: 31 ff xor %edi,%edi
5888: 80 e5 df and $0xdf,%ch
588b: 49 21 c5 and %rax,%r13
588e: 89 ce mov %ecx,%esi
5890: 49 09 f5 or %rsi,%r13
5893: e9 d2 fc ff ff jmpq 556a <vxlan_rcv+0x19a>
5898: f6 c1 80 test $0x80,%cl
}
delta = remcsum_adjust(ptr, skb->csum, start, offset);
/* Adjust skb->csum since we changed the packet */
skb->csum = csum_add(skb->csum, delta);
589b: 74 5a je 58f7 <vxlan_rcv+0x527>
589d: 4c 89 ee mov %r13,%rsi
58a0: 48 c1 ee 10 shr $0x10,%rsi
58a4: 66 c1 c6 08 rol $0x8,%si
58a8: 0f b7 f6 movzwl %si,%esi
!!(vxflags & VXLAN_F_REMCSUM_NOPARTIAL));
out:
unparsed->vx_flags &= ~VXLAN_HF_RCO;
58ab: 41 89 30 mov %esi,(%r8)
58ae: 49 8b 74 24 58 mov 0x58(%r12),%rsi
58b3: 48 83 e6 fe and $0xfffffffffffffffe,%rsi
unparsed->vx_vni &= VXLAN_VNI_MASK;
58b7: 74 0f je 58c8 <vxlan_rcv+0x4f8>
return false;
skb_remcsum_process(skb, (void *)(vxlan_hdr(skb) + 1), start, offset,
!!(vxflags & VXLAN_F_REMCSUM_NOPARTIAL));
out:
unparsed->vx_flags &= ~VXLAN_HF_RCO;
58b9: 66 83 8e c8 00 00 00 orw $0x10,0xc8(%rsi)
58c0: 10
58c1: c6 86 e8 00 00 00 04 movb $0x4,0xe8(%rsi)
struct vxlan_metadata *md)
{
struct vxlanhdr_gbp *gbp = (struct vxlanhdr_gbp *)unparsed;
struct metadata_dst *tun_dst;
if (!(unparsed->vx_flags & VXLAN_HF_GBP))
58c8: 44 89 e8 mov %r13d,%eax
58cb: 0f b6 f4 movzbl %ah,%esi
goto out;
md->gbp = ntohs(gbp->policy_id);
58ce: 40 f6 c6 40 test $0x40,%sil
58d2: 74 07 je 58db <vxlan_rcv+0x50b>
58d4: 41 81 08 00 00 40 00 orl $0x400000,(%r8)
58db: 83 e6 08 and $0x8,%esi
tun_dst = (struct metadata_dst *)skb_dst(skb);
if (tun_dst) {
58de: 74 07 je 58e7 <vxlan_rcv+0x517>
58e0: 41 81 08 00 00 08 00 orl $0x80000,(%r8)
58e7: 80 e6 20 and $0x20,%dh
tun_dst->u.tun_info.key.tun_flags |= TUNNEL_VXLAN_OPT;
58ea: 75 0b jne 58f7 <vxlan_rcv+0x527>
58ec: 41 8b 10 mov (%r8),%edx
58ef: 41 89 94 24 b4 00 00 mov %edx,0xb4(%r12)
58f6: 00
tun_dst->u.tun_info.options_len = sizeof(*md);
58f7: 83 e1 7f and $0x7f,%ecx
}
if (gbp->dont_learn)
58fa: e9 74 fc ff ff jmpq 5573 <vxlan_rcv+0x1a3>
58ff: ba 65 58 00 00 mov $0x5865,%edx
md->gbp |= VXLAN_GBP_DONT_LEARN;
5904: e9 9e fd ff ff jmpq 56a7 <vxlan_rcv+0x2d7>
5909: ba 86 dd 00 00 mov $0xdd86,%edx
if (gbp->policy_applied)
590e: e9 94 fd ff ff jmpq 56a7 <vxlan_rcv+0x2d7>
md->gbp |= VXLAN_GBP_POLICY_APPLIED;
5913: 49 8b 84 24 d8 00 00 mov 0xd8(%r12),%rax
591a: 00
/* In flow-based mode, GBP is carried in dst_metadata */
if (!(vxflags & VXLAN_F_COLLECT_METADATA))
591b: 49 2b 84 24 d0 00 00 sub 0xd0(%r12),%rax
5922: 00
skb->mark = md->gbp;
5923: 4c 89 e7 mov %r12,%rdi
5926: 66 41 89 84 24 c6 00 mov %ax,0xc6(%r12)
592d: 00 00
out:
unparsed->vx_flags &= ~VXLAN_GBP_USED_BITS;
592f: 48 8b 73 30 mov 0x30(%rbx),%rsi
5933: e8 00 00 00 00 callq 5938 <vxlan_rcv+0x568>
5938: 66 41 89 84 24 c0 00 mov %ax,0xc0(%r12)
593f: 00 00
* processing, thus drop the packet.
*/
if (gpe->oam_flag)
return false;
switch (gpe->next_protocol) {
5941: 41 0f b6 84 24 91 00 movzbl 0x91(%r12),%eax
5948: 00 00
return skb->mac_header != (typeof(skb->mac_header))~0U;
}
static inline void skb_reset_mac_header(struct sk_buff *skb)
{
skb->mac_header = skb->data - skb->head;
594a: 41 0f b7 b4 24 c6 00 movzwl 0xc6(%r12),%esi
5951: 00 00
struct sk_buff *skb)
{
union vxlan_addr saddr;
skb_reset_mac_header(skb);
skb->protocol = eth_type_trans(skb, vxlan->dev);
5953: 49 8b 8c 24 d0 00 00 mov 0xd0(%r12),%rcx
595a: 00
595b: 89 c2 mov %eax,%edx
595d: 83 e2 06 and $0x6,%edx
5960: 48 8d 3c 31 lea (%rcx,%rsi,1),%rdi
5964: 80 fa 04 cmp $0x4,%dl
5967: 0f 84 e9 03 00 00 je 5d56 <vxlan_rcv+0x986>
596d: 80 fa 06 cmp $0x6,%dl
5970: 0f 84 18 02 00 00 je 5b8e <vxlan_rcv+0x7be>
static __always_inline void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
unsigned int off)
{
if (skb->ip_summed == CHECKSUM_COMPLETE)
5976: 48 8b 43 30 mov 0x30(%rbx),%rax
skb->network_header += offset;
}
static inline unsigned char *skb_mac_header(const struct sk_buff *skb)
{
return skb->head + skb->mac_header;
597a: 8b 57 06 mov 0x6(%rdi),%edx
597d: 4c 8b 80 38 03 00 00 mov 0x338(%rax),%r8
5984: 0f b7 47 0a movzwl 0xa(%rdi),%eax
5988: 66 41 33 40 04 xor 0x4(%r8),%ax
static __always_inline void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
unsigned int off)
{
if (skb->ip_summed == CHECKSUM_COMPLETE)
598d: 41 33 10 xor (%r8),%edx
skb->network_header += offset;
}
static inline unsigned char *skb_mac_header(const struct sk_buff *skb)
{
return skb->head + skb->mac_header;
5990: 0f b7 c0 movzwl %ax,%eax
5993: 09 c2 or %eax,%edx
static __always_inline void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
unsigned int off)
{
if (skb->ip_summed == CHECKSUM_COMPLETE)
5995: 0f 84 f5 fa ff ff je 5490 <vxlan_rcv+0xc0>
599b: 49 8b 47 10 mov 0x10(%r15),%rax
skb->csum = csum_block_sub(skb->csum,
csum_partial(start, len, 0), off);
else if (skb->ip_summed == CHECKSUM_PARTIAL &&
599f: 45 0f b7 ac 24 c4 00 movzwl 0xc4(%r12),%r13d
59a6: 00 00
skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
/* Ignore packet loops (and multicast echo) */
if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
59a8: 48 8b 40 20 mov 0x20(%rax),%rax
59ac: 66 83 78 10 02 cmpw $0x2,0x10(%rax)
59b1: 0f 84 ff 02 00 00 je 5cb6 <vxlan_rcv+0x8e6>
59b7: 4a 8b 44 29 08 mov 0x8(%rcx,%r13,1),%rax
59bc: 4a 8b 54 29 10 mov 0x10(%rcx,%r13,1),%rdx
59c1: 48 89 44 24 38 mov %rax,0x38(%rsp)
59c6: b8 0a 00 00 00 mov $0xa,%eax
return vni_field;
}
static inline unsigned short vxlan_get_sk_family(struct vxlan_sock *vs)
{
return vs->sock->sk->sk_family;
59cb: 48 89 54 24 40 mov %rdx,0x40(%rsp)
return false;
/* Get address from the outer IP header */
if (vxlan_get_sk_family(vs) == AF_INET) {
saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
59d0: 66 89 44 24 30 mov %ax,0x30(%rsp)
59d5: f6 83 98 00 00 00 01 testb $0x1,0x98(%rbx)
/* Ignore packet loops (and multicast echo) */
if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
return false;
/* Get address from the outer IP header */
if (vxlan_get_sk_family(vs) == AF_INET) {
59dc: 0f 85 a2 02 00 00 jne 5c84 <vxlan_rcv+0x8b4>
59e2: 49 8b 84 24 d8 00 00 mov 0xd8(%r12),%rax
59e9: 00
saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
saddr.sa.sa_family = AF_INET;
#if IS_ENABLED(CONFIG_IPV6)
} else {
saddr.sin6.sin6_addr = ipv6_hdr(skb)->saddr;
59ea: 48 29 c8 sub %rcx,%rax
59ed: 89 c2 mov %eax,%edx
59ef: e9 d3 fb ff ff jmpq 55c7 <vxlan_rcv+0x1f7>
59f4: 45 0f b7 84 24 c0 00 movzwl 0xc0(%r12),%r8d
59fb: 00 00
59fd: 66 41 83 f8 08 cmp $0x8,%r8w
saddr.sa.sa_family = AF_INET6;
5a02: 0f 84 c6 02 00 00 je 5cce <vxlan_rcv+0x8fe>
#endif
}
if ((vxlan->flags & VXLAN_F_LEARN) &&
5a08: 66 41 81 f8 86 dd cmp $0xdd86,%r8w
5a0e: 0f 85 f3 fb ff ff jne 5607 <vxlan_rcv+0x237>
5a14: 0f b7 c0 movzwl %ax,%eax
5a17: 0f b7 3c 01 movzwl (%rcx,%rax,1),%edi
5a1b: 66 c1 c7 08 rol $0x8,%di
5a1f: 66 c1 ef 04 shr $0x4,%di
5a23: 83 e7 03 and $0x3,%edi
static inline int IP_ECN_decapsulate(const struct iphdr *oiph,
struct sk_buff *skb)
{
__u8 inner;
if (skb->protocol == htons(ETH_P_IP))
5a26: 41 0f b6 55 01 movzbl 0x1(%r13),%edx
5a2b: 0f 85 2b 02 00 00 jne 5c5c <vxlan_rcv+0x88c>
5a31: 83 e2 03 and $0x3,%edx
5a34: 0f 84 cd fb ff ff je 5607 <vxlan_rcv+0x237>
inner = ip_hdr(skb)->tos;
else if (skb->protocol == htons(ETH_P_IPV6))
5a3a: 80 fa 02 cmp $0x2,%dl
5a3d: 0f 86 ea 01 00 00 jbe 5c2d <vxlan_rcv+0x85d>
5a43: 80 fa 03 cmp $0x3,%dl
5a46: 0f 85 bb fb ff ff jne 5607 <vxlan_rcv+0x237>
5a4c: 80 3d 00 00 00 00 00 cmpb $0x0,0x0(%rip) # 5a53 <vxlan_rcv+0x683>
* 2 if packet should be dropped
*/
static inline int INET_ECN_decapsulate(struct sk_buff *skb,
__u8 outer, __u8 inner)
{
if (INET_ECN_is_not_ect(inner)) {
5a53: 74 0d je 5a62 <vxlan_rcv+0x692>
5a55: e8 00 00 00 00 callq 5a5a <vxlan_rcv+0x68a>
else if (skb->protocol == htons(ETH_P_IPV6))
inner = ipv6_get_dsfield(ipv6_hdr(skb));
else
return 0;
return INET_ECN_decapsulate(skb, oiph->tos, inner);
5a5a: 85 c0 test %eax,%eax
* 2 if packet should be dropped
*/
static inline int INET_ECN_decapsulate(struct sk_buff *skb,
__u8 outer, __u8 inner)
{
if (INET_ECN_is_not_ect(inner)) {
5a5c: 0f 85 55 04 00 00 jne 5eb7 <vxlan_rcv+0xae7>
switch (outer & INET_ECN_MASK) {
5a62: 48 8b 43 30 mov 0x30(%rbx),%rax
5a66: 48 83 80 98 01 00 00 addq $0x1,0x198(%rax)
5a6d: 01
5a6e: 48 8b 43 30 mov 0x30(%rbx),%rax
5a72: 48 83 80 50 01 00 00 addq $0x1,0x150(%rax)
5a79: 01
5a7a: e9 11 fa ff ff jmpq 5490 <vxlan_rcv+0xc0>
#if IS_ENABLED(CONFIG_IPV6)
else
err = IP6_ECN_decapsulate(oiph, skb);
#endif
if (unlikely(err) && log_ecn_error) {
5a7f: 0f b7 c0 movzwl %ax,%eax
5a82: 44 0f b7 04 01 movzwl (%rcx,%rax,1),%r8d
if (vxlan_get_sk_family(vs) == AF_INET)
net_info_ratelimited("non-ECT from %pI4 with TOS=%#x\n",
5a87: 66 41 c1 c0 08 rol $0x8,%r8w
5a8c: 66 41 c1 e8 04 shr $0x4,%r8w
5a91: 41 0f b7 55 00 movzwl 0x0(%r13),%edx
oiph = skb_network_header(skb);
skb_reset_network_header(skb);
if (!vxlan_ecn_decapsulate(vs, oiph, skb)) {
++vxlan->dev->stats.rx_frame_errors;
5a96: 66 c1 c2 08 rol $0x8,%dx
5a9a: 66 c1 ea 04 shr $0x4,%dx
++vxlan->dev->stats.rx_errors;
5a9e: 41 83 e0 03 and $0x3,%r8d
5aa2: 75 52 jne 5af6 <vxlan_rcv+0x726>
5aa4: 83 e2 03 and $0x3,%edx
5aa7: 0f 84 5a fb ff ff je 5607 <vxlan_rcv+0x237>
goto drop;
5aad: 80 fa 02 cmp $0x2,%dl
5ab0: 0f 86 77 01 00 00 jbe 5c2d <vxlan_rcv+0x85d>
5ab6: 80 fa 03 cmp $0x3,%dl
5ab9: 0f 85 48 fb ff ff jne 5607 <vxlan_rcv+0x237>
5abf: 80 3d 00 00 00 00 00 cmpb $0x0,0x0(%rip) # 5ac6 <vxlan_rcv+0x6f6>
5ac6: 74 9a je 5a62 <vxlan_rcv+0x692>
5ac8: e8 00 00 00 00 callq 5acd <vxlan_rcv+0x6fd>
5acd: 85 c0 test %eax,%eax
* 2 if packet should be dropped
*/
static inline int INET_ECN_decapsulate(struct sk_buff *skb,
__u8 outer, __u8 inner)
{
if (INET_ECN_is_not_ect(inner)) {
5acf: 74 91 je 5a62 <vxlan_rcv+0x692>
5ad1: 41 be 02 00 00 00 mov $0x2,%r14d
switch (outer & INET_ECN_MASK) {
5ad7: 49 8d 75 08 lea 0x8(%r13),%rsi
5adb: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
5ae2: e8 00 00 00 00 callq 5ae7 <vxlan_rcv+0x717>
5ae7: 41 83 fe 02 cmp $0x2,%r14d
5aeb: 0f 84 71 ff ff ff je 5a62 <vxlan_rcv+0x692>
#if IS_ENABLED(CONFIG_IPV6)
else
err = IP6_ECN_decapsulate(oiph, skb);
#endif
if (unlikely(err) && log_ecn_error) {
5af1: e9 11 fb ff ff jmpq 5607 <vxlan_rcv+0x237>
5af6: 83 e2 03 and $0x3,%edx
if (vxlan_get_sk_family(vs) == AF_INET)
net_info_ratelimited("non-ECT from %pI4 with TOS=%#x\n",
&((struct iphdr *)oiph)->saddr,
((struct iphdr *)oiph)->tos);
else
net_info_ratelimited("non-ECT from %pI6\n",
5af9: 80 fa 03 cmp $0x3,%dl
5afc: 0f 85 05 fb ff ff jne 5607 <vxlan_rcv+0x237>
return 0;
case INET_ECN_ECT_0:
case INET_ECN_ECT_1:
return 1;
case INET_ECN_CE:
return 2;
5b02: 66 83 ff 08 cmp $0x8,%di
5b06: 0f 84 8e 02 00 00 je 5d9a <vxlan_rcv+0x9ca>
5b0c: 66 81 ff 86 dd cmp $0xdd86,%di
5b11: 0f 85 f0 fa ff ff jne 5607 <vxlan_rcv+0x237>
}
oiph = skb_network_header(skb);
skb_reset_network_header(skb);
if (!vxlan_ecn_decapsulate(vs, oiph, skb)) {
5b17: 41 8b 94 24 c8 00 00 mov 0xc8(%r12),%edx
5b1e: 00
5b1f: 48 01 c8 add %rcx,%rax
5b22: 48 8d 70 28 lea 0x28(%rax),%rsi
}
}
if (INET_ECN_is_ce(outer))
5b26: 48 01 d1 add %rdx,%rcx
5b29: 48 39 ce cmp %rcx,%rsi
5b2c: 0f 87 d5 fa ff ff ja 5607 <vxlan_rcv+0x237>
ipv6_change_dsfield(inner, INET_ECN_MASK, dscp);
}
static inline int INET_ECN_set_ce(struct sk_buff *skb)
{
switch (skb->protocol) {
5b32: 0f b7 10 movzwl (%rax),%edx
5b35: 66 c1 c2 08 rol $0x8,%dx
5b39: 80 e2 30 and $0x30,%dl
5b3c: 0f 84 c5 fa ff ff je 5607 <vxlan_rcv+0x237>
5b42: 8b 10 mov (%rax),%edx
5b44: 89 d1 mov %edx,%ecx
5b46: 80 cd 30 or $0x30,%ch
skb_tail_pointer(skb))
return IP_ECN_set_ce(ip_hdr(skb));
break;
case cpu_to_be16(ETH_P_IPV6):
if (skb_network_header(skb) + sizeof(struct ipv6hdr) <=
5b49: 89 08 mov %ecx,(%rax)
5b4b: 41 0f b6 84 24 91 00 movzbl 0x91(%r12),%eax
5b52: 00 00
5b54: 83 e0 06 and $0x6,%eax
5b57: 3c 04 cmp $0x4,%al
5b59: 0f 85 a8 fa ff ff jne 5607 <vxlan_rcv+0x237>
5b5f: 41 8b 84 24 98 00 00 mov 0x98(%r12),%eax
5b66: 00
5b67: f7 d2 not %edx
*/
static inline int IP6_ECN_set_ce(struct sk_buff *skb, struct ipv6hdr *iph)
{
__be32 from, to;
if (INET_ECN_is_not_ect(ipv6_get_dsfield(iph)))
5b69: 01 d0 add %edx,%eax
5b6b: 83 d0 00 adc $0x0,%eax
5b6e: 01 c8 add %ecx,%eax
5b70: 83 d0 00 adc $0x0,%eax
return 0;
from = *(__be32 *)iph;
5b73: 41 89 84 24 98 00 00 mov %eax,0x98(%r12)
5b7a: 00
to = from | htonl(INET_ECN_CE << 20);
*(__be32 *)iph = to;
if (skb->ip_summed == CHECKSUM_COMPLETE)
5b7b: e9 87 fa ff ff jmpq 5607 <vxlan_rcv+0x237>
5b80: 0f b7 c0 movzwl %ax,%eax
5b83: 44 0f b6 44 01 01 movzbl 0x1(%rcx,%rax,1),%r8d
5b89: e9 03 ff ff ff jmpq 5a91 <vxlan_rcv+0x6c1>
5b8e: 41 0f b7 94 24 98 00 movzwl 0x98(%r12),%edx
5b95: 00 00
5b97: 41 2b 94 24 d8 00 00 sub 0xd8(%r12),%edx
5b9e: 00
5b9f: 01 ca add %ecx,%edx
5ba1: 0f 89 cf fd ff ff jns 5976 <vxlan_rcv+0x5a6>
skb->csum = csum_add(csum_sub(skb->csum, (__force __wsum)from),
5ba7: 83 e0 f9 and $0xfffffff9,%eax
5baa: 41 88 84 24 91 00 00 mov %al,0x91(%r12)
5bb1: 00
struct sk_buff *skb)
{
__u8 inner;
if (skb->protocol == htons(ETH_P_IP))
inner = ip_hdr(skb)->tos;
5bb2: e9 bf fd ff ff jmpq 5976 <vxlan_rcv+0x5a6>
5bb7: 49 8b 54 24 20 mov 0x20(%r12),%rdx
5bbc: f6 82 f1 00 00 00 40 testb $0x40,0xf1(%rdx)
5bc3: 0f 84 99 fa ff ff je 5662 <vxlan_rcv+0x292>
5bc9: 65 48 03 05 00 00 00 add %gs:0x0(%rip),%rax # 5bd1 <vxlan_rcv+0x801>
5bd0: 00
5bd1: 8b 3d 00 00 00 00 mov 0x0(%rip),%edi # 5bd7 <vxlan_rcv+0x807>
skb_checksum_start_offset(skb) < 0)
skb->ip_summed = CHECKSUM_NONE;
5bd7: 39 78 10 cmp %edi,0x10(%rax)
5bda: 0f 87 49 02 00 00 ja 5e29 <vxlan_rcv+0xa59>
5be0: 48 8b 50 08 mov 0x8(%rax),%rdx
5be4: 49 89 04 24 mov %rax,(%r12)
};
static inline int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
{
struct gro_cell *cell;
struct net_device *dev = skb->dev;
5be8: 49 89 54 24 08 mov %rdx,0x8(%r12)
if (!gcells->cells || skb_cloned(skb) || !(dev->features & NETIF_F_GRO))
5bed: 4c 89 22 mov %r12,(%rdx)
5bf0: 8b 78 10 mov 0x10(%rax),%edi
5bf3: 4c 89 60 08 mov %r12,0x8(%rax)
5bf7: 8d 57 01 lea 0x1(%rdi),%edx
return netif_rx(skb);
cell = this_cpu_ptr(gcells->cells);
5bfa: 83 fa 01 cmp $0x1,%edx
5bfd: 89 50 10 mov %edx,0x10(%rax)
5c00: 0f 85 92 f8 ff ff jne 5498 <vxlan_rcv+0xc8>
if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
5c06: 48 8b 50 28 mov 0x28(%rax),%rdx
5c0a: 48 8d 78 18 lea 0x18(%rax),%rdi
5c0e: 80 e2 02 and $0x2,%dl
static inline void __skb_queue_before(struct sk_buff_head *list,
struct sk_buff *next,
struct sk_buff *newsk)
{
__skb_insert(newsk, next->prev, next, list);
5c11: 0f 85 81 f8 ff ff jne 5498 <vxlan_rcv+0xc8>
struct sk_buff_head *list);
static inline void __skb_insert(struct sk_buff *newsk,
struct sk_buff *prev, struct sk_buff *next,
struct sk_buff_head *list)
{
newsk->next = next;
5c17: f0 0f ba 68 28 00 lock btsl $0x0,0x28(%rax)
newsk->prev = prev;
next->prev = prev->next = newsk;
5c1d: 0f 82 75 f8 ff ff jb 5498 <vxlan_rcv+0xc8>
5c23: e8 00 00 00 00 callq 5c28 <vxlan_rcv+0x858>
list->qlen++;
5c28: e9 6b f8 ff ff jmpq 5498 <vxlan_rcv+0xc8>
5c2d: 80 3d 00 00 00 00 00 cmpb $0x0,0x0(%rip) # 5c34 <vxlan_rcv+0x864>
kfree_skb(skb);
return NET_RX_DROP;
}
__skb_queue_tail(&cell->napi_skbs, skb);
if (skb_queue_len(&cell->napi_skbs) == 1)
5c34: 0f 84 cd f9 ff ff je 5607 <vxlan_rcv+0x237>
napi_schedule(&cell->napi);
5c3a: 66 83 fe 02 cmp $0x2,%si
* insure only one NAPI poll instance runs. We also make
* sure there is no pending NAPI disable.
*/
static inline bool napi_schedule_prep(struct napi_struct *n)
{
return !napi_disable_pending(n) &&
5c3e: 0f 84 b8 01 00 00 je 5dfc <vxlan_rcv+0xa2c>
5c44: e8 00 00 00 00 callq 5c49 <vxlan_rcv+0x879>
* This operation is atomic and cannot be reordered.
* It also implies a memory barrier.
*/
static __always_inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
{
GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", c);
5c49: 85 c0 test %eax,%eax
5c4b: 0f 84 b6 f9 ff ff je 5607 <vxlan_rcv+0x237>
5c51: 41 be 01 00 00 00 mov $0x1,%r14d
* running.
*/
static inline void napi_schedule(struct napi_struct *n)
{
if (napi_schedule_prep(n))
__napi_schedule(n);
5c57: e9 7b fe ff ff jmpq 5ad7 <vxlan_rcv+0x707>
5c5c: 83 e2 03 and $0x3,%edx
#if IS_ENABLED(CONFIG_IPV6)
else
err = IP6_ECN_decapsulate(oiph, skb);
#endif
if (unlikely(err) && log_ecn_error) {
5c5f: 80 fa 03 cmp $0x3,%dl
5c62: 0f 85 9f f9 ff ff jne 5607 <vxlan_rcv+0x237>
5c68: 66 41 83 f8 08 cmp $0x8,%r8w
if (vxlan_get_sk_family(vs) == AF_INET)
5c6d: 0f 84 27 01 00 00 je 5d9a <vxlan_rcv+0x9ca>
5c73: 66 41 81 f8 86 dd cmp $0xdd86,%r8w
net_info_ratelimited("non-ECT from %pI4 with TOS=%#x\n",
&((struct iphdr *)oiph)->saddr,
((struct iphdr *)oiph)->tos);
else
net_info_ratelimited("non-ECT from %pI6\n",
5c79: 0f 84 98 fe ff ff je 5b17 <vxlan_rcv+0x747>
5c7f: e9 83 f9 ff ff jmpq 5607 <vxlan_rcv+0x237>
5c84: 49 8b 7c 24 20 mov 0x20(%r12),%rdi
5c89: 48 8d 54 31 06 lea 0x6(%rcx,%rsi,1),%rdx
case INET_ECN_CE:
return 2;
}
}
if (INET_ECN_is_ce(outer))
5c8e: 48 8d 74 24 30 lea 0x30(%rsp),%rsi
5c93: e8 98 d6 ff ff callq 3330 <vxlan_snoop>
ipv6_change_dsfield(inner, INET_ECN_MASK, dscp);
}
static inline int INET_ECN_set_ce(struct sk_buff *skb)
{
switch (skb->protocol) {
5c98: 84 c0 test %al,%al
5c9a: 0f 85 f0 f7 ff ff jne 5490 <vxlan_rcv+0xc0>
5ca0: 49 8b 8c 24 d0 00 00 mov 0xd0(%r12),%rcx
5ca7: 00
5ca8: 45 0f b7 ac 24 c4 00 movzwl 0xc4(%r12),%r13d
5caf: 00 00
5cb1: e9 2c fd ff ff jmpq 59e2 <vxlan_rcv+0x612>
saddr.sa.sa_family = AF_INET6;
#endif
}
if ((vxlan->flags & VXLAN_F_LEARN) &&
vxlan_snoop(skb->dev, &saddr, eth_hdr(skb)->h_source))
5cb6: 42 8b 44 29 0c mov 0xc(%rcx,%r13,1),%eax
5cbb: ba 02 00 00 00 mov $0x2,%edx
5cc0: 66 89 54 24 30 mov %dx,0x30(%rsp)
5cc5: 89 44 24 34 mov %eax,0x34(%rsp)
saddr.sin6.sin6_addr = ipv6_hdr(skb)->saddr;
saddr.sa.sa_family = AF_INET6;
#endif
}
if ((vxlan->flags & VXLAN_F_LEARN) &&
5cc9: e9 07 fd ff ff jmpq 59d5 <vxlan_rcv+0x605>
5cce: 0f b7 c0 movzwl %ax,%eax
5cd1: 0f b6 7c 01 01 movzbl 0x1(%rcx,%rax,1),%edi
5cd6: e9 48 fd ff ff jmpq 5a23 <vxlan_rcv+0x653>
5cdb: 44 89 f0 mov %r14d,%eax
5cde: 41 80 8c 24 91 00 00 orb $0x6,0x91(%r12)
5ce5: 00 06
if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
return false;
/* Get address from the outer IP header */
if (vxlan_get_sk_family(vs) == AF_INET) {
saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
5ce7: 25 fe ff 00 00 and $0xfffe,%eax
saddr.sa.sa_family = AF_INET;
5cec: 4c 01 d0 add %r10,%rax
5cef: 48 29 f0 sub %rsi,%rax
5cf2: 66 41 89 84 24 98 00 mov %ax,0x98(%r12)
5cf9: 00 00
if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
return false;
/* Get address from the outer IP header */
if (vxlan_get_sk_family(vs) == AF_INET) {
saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
5cfb: 0f b7 44 24 18 movzwl 0x18(%rsp),%eax
struct sk_buff *skb)
{
__u8 inner;
if (skb->protocol == htons(ETH_P_IP))
inner = ip_hdr(skb)->tos;
5d00: 44 29 f0 sub %r14d,%eax
5d03: 66 41 89 84 24 9a 00 mov %ax,0x9a(%r12)
5d0a: 00 00
static inline void skb_remcsum_adjust_partial(struct sk_buff *skb, void *ptr,
u16 start, u16 offset)
{
skb->ip_summed = CHECKSUM_PARTIAL;
skb->csum_start = ((unsigned char *)ptr + start) - skb->head;
5d0c: 41 8b 97 1c 20 00 00 mov 0x201c(%r15),%edx
} while (0)
static inline void skb_remcsum_adjust_partial(struct sk_buff *skb, void *ptr,
u16 start, u16 offset)
{
skb->ip_summed = CHECKSUM_PARTIAL;
5d13: e9 61 fb ff ff jmpq 5879 <vxlan_rcv+0x4a9>
skb->csum_start = ((unsigned char *)ptr + start) - skb->head;
5d18: 39 c7 cmp %eax,%edi
5d1a: 89 54 24 08 mov %edx,0x8(%rsp)
5d1e: 44 89 6c 24 20 mov %r13d,0x20(%rsp)
5d23: 4c 89 44 24 10 mov %r8,0x10(%rsp)
5d28: 0f 82 62 f7 ff ff jb 5490 <vxlan_rcv+0xc0>
skb->csum_offset = offset - start;
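The `skb_remcsum_adjust_partial()` fragments interleaved above boil down to two offset computations. A user-space sketch of that arithmetic (the `skb_sketch` struct is a hypothetical, trimmed-down stand-in for `sk_buff`; only the field names mirror the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define CHECKSUM_PARTIAL 3   /* matches the kernel's ip_summed value */

struct skb_sketch {          /* hypothetical, trimmed sk_buff */
	unsigned char *head;
	int ip_summed;
	uint16_t csum_start;     /* offset of checksum start from head */
	uint16_t csum_offset;    /* checksum field relative to csum_start */
};

/* csum_start is where checksumming begins, measured from the buffer
 * head; csum_offset is where the checksum field itself sits relative
 * to that start point. */
static void remcsum_adjust_partial(struct skb_sketch *skb, void *ptr,
				   uint16_t start, uint16_t offset)
{
	skb->ip_summed = CHECKSUM_PARTIAL;
	skb->csum_start =
		(uint16_t)(((unsigned char *)ptr + start) - skb->head);
	skb->csum_offset = offset - start;
}
```

With `ptr` 14 bytes into the buffer (past an Ethernet header) and `start`/`offset` of 20/26, this yields `csum_start` 34 and `csum_offset` 6.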
5d2e: 29 f0 sub %esi,%eax
5d30: 4c 89 e7 mov %r12,%rdi
5d33: 89 c6 mov %eax,%esi
5d35: e8 00 00 00 00 callq 5d3a <vxlan_rcv+0x96a>
5d3a: 48 85 c0 test %rax,%rax
5d3d: 4c 8b 44 24 10 mov 0x10(%rsp),%r8
5d42: 44 8b 4c 24 20 mov 0x20(%rsp),%r9d
5d47: 8b 54 24 08 mov 0x8(%rsp),%edx
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
return 1;
if (unlikely(len > skb->len))
5d4b: 0f 84 3f f7 ff ff je 5490 <vxlan_rcv+0xc0>
5d51: e9 5d fa ff ff jmpq 57b3 <vxlan_rcv+0x3e3>
5d56: 31 d2 xor %edx,%edx
5d58: be 0e 00 00 00 mov $0xe,%esi
5d5d: e8 00 00 00 00 callq 5d62 <vxlan_rcv+0x992>
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
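The `pskb_may_pull()` fragments above encode a three-way decision: a pull of `len` bytes succeeds immediately when it fits in the linear headroom (`skb->len - skb->data_len`), fails outright when it exceeds the whole packet, and otherwise falls back to `__pskb_pull_tail()` to linearize. A minimal sketch of that test, with hypothetical stand-in fields:

```c
#include <assert.h>

struct skb_lens {               /* hypothetical stand-in for sk_buff */
	unsigned int len;           /* total packet length */
	unsigned int data_len;      /* bytes held in fragments */
};

enum pull_result { PULL_OK, PULL_TOO_LONG, PULL_NEEDS_TAIL };

static enum pull_result may_pull(const struct skb_lens *skb, unsigned int n)
{
	unsigned int headlen = skb->len - skb->data_len;  /* skb_headlen() */

	if (n <= headlen)
		return PULL_OK;         /* fast path: already linear */
	if (n > skb->len)
		return PULL_TOO_LONG;   /* cannot succeed at all */
	return PULL_NEEDS_TAIL;     /* kernel calls __pskb_pull_tail() */
}
```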
5d62: 41 0f b7 b4 24 c6 00 movzwl 0xc6(%r12),%esi
5d69: 00 00
goto out;
start = vxlan_rco_start(unparsed->vx_vni);
offset = start + vxlan_rco_offset(unparsed->vx_vni);
if (!pskb_may_pull(skb, offset + sizeof(u16)))
5d6b: 49 8b 8c 24 d0 00 00 mov 0xd0(%r12),%rcx
5d72: 00
5d73: f7 d0 not %eax
5d75: 89 c2 mov %eax,%edx
5d77: 41 8b 84 24 98 00 00 mov 0x98(%r12),%eax
5d7e: 00
5d7f: 01 d0 add %edx,%eax
5d81: 83 d0 00 adc $0x0,%eax
5d84: 41 89 84 24 98 00 00 mov %eax,0x98(%r12)
5d8b: 00
static __always_inline void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
unsigned int off)
{
if (skb->ip_summed == CHECKSUM_COMPLETE)
skb->csum = csum_block_sub(skb->csum,
5d8c: 48 8d 3c 31 lea (%rcx,%rsi,1),%rdi
5d90: e9 e1 fb ff ff jmpq 5976 <vxlan_rcv+0x5a6>
5d95: e8 00 00 00 00 callq 5d9a <vxlan_rcv+0x9ca>
5d9a: 41 8b 94 24 c8 00 00 mov 0xc8(%r12),%edx
5da1: 00
5da2: 48 01 c8 add %rcx,%rax
5da5: 48 8d 70 14 lea 0x14(%rax),%rsi
5da9: 48 01 d1 add %rdx,%rcx
5dac: 48 39 ce cmp %rcx,%rsi
5daf: 0f 87 52 f8 ff ff ja 5607 <vxlan_rcv+0x237>
5db5: 0f b6 70 01 movzbl 0x1(%rax),%esi
5db9: 0f b7 78 0a movzwl 0xa(%rax),%edi
5dbd: 89 f1 mov %esi,%ecx
5dbf: 83 c6 01 add $0x1,%esi
5dc2: 89 f2 mov %esi,%edx
5dc4: 83 e2 03 and $0x3,%edx
drop:
/* Consume bad packet */
kfree_skb(skb);
return 0;
}
5dc7: 40 80 e6 02 and $0x2,%sil
static inline int INET_ECN_set_ce(struct sk_buff *skb)
{
switch (skb->protocol) {
case cpu_to_be16(ETH_P_IP):
if (skb_network_header(skb) + sizeof(struct iphdr) <=
5dcb: 0f 84 36 f8 ff ff je 5607 <vxlan_rcv+0x237>
5dd1: 66 c1 c2 08 rol $0x8,%dx
5dd5: 31 f6 xor %esi,%esi
5dd7: 0f b7 d2 movzwl %dx,%edx
5dda: 8d 94 17 ff fb 00 00 lea 0xfbff(%rdi,%rdx,1),%edx
5de1: 81 fa fe ff 00 00 cmp $0xfffe,%edx
} while (0)
static inline int IP_ECN_set_ce(struct iphdr *iph)
{
u32 check = (__force u32)iph->check;
u32 ecn = (iph->tos + 1) & INET_ECN_MASK;
5de7: 40 0f 97 c6 seta %sil
(label) |= htonl(INET_ECN_ECT_0 << 20); \
} while (0)
static inline int IP_ECN_set_ce(struct iphdr *iph)
{
u32 check = (__force u32)iph->check;
5deb: 83 c9 03 or $0x3,%ecx
u32 ecn = (iph->tos + 1) & INET_ECN_MASK;
5dee: 01 f2 add %esi,%edx
5df0: 88 48 01 mov %cl,0x1(%rax)
5df3: 66 89 50 0a mov %dx,0xa(%rax)
* INET_ECN_NOT_ECT => 01
* INET_ECN_ECT_1 => 10
* INET_ECN_ECT_0 => 11
* INET_ECN_CE => 00
*/
if (!(ecn & 2))
5df7: e9 0b f8 ff ff jmpq 5607 <vxlan_rcv+0x237>
5dfc: e8 00 00 00 00 callq 5e01 <vxlan_rcv+0xa31>
/*
* The following gives us:
* INET_ECN_ECT_1 => check += htons(0xFFFD)
* INET_ECN_ECT_0 => check += htons(0xFFFE)
*/
check += (__force u16)htons(0xFFFB) + (__force u16)htons(ecn);
5e01: 85 c0 test %eax,%eax
5e03: 0f 84 fe f7 ff ff je 5607 <vxlan_rcv+0x237>
5e09: 41 be 01 00 00 00 mov $0x1,%r14d
5e0f: 41 0f b6 55 01 movzbl 0x1(%r13),%edx
iph->check = (__force __sum16)(check + (check>=0xFFFF));
5e14: 49 8d 75 0c lea 0xc(%r13),%rsi
5e18: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
5e1f: e8 00 00 00 00 callq 5e24 <vxlan_rcv+0xa54>
5e24: e9 be fc ff ff jmpq 5ae7 <vxlan_rcv+0x717>
5e29: f0 48 ff 82 e8 01 00 lock incq 0x1e8(%rdx)
5e30: 00
err = IP6_ECN_decapsulate(oiph, skb);
#endif
if (unlikely(err) && log_ecn_error) {
if (vxlan_get_sk_family(vs) == AF_INET)
net_info_ratelimited("non-ECT from %pI4 with TOS=%#x\n",
5e31: 4c 89 e7 mov %r12,%rdi
5e34: e8 00 00 00 00 callq 5e39 <vxlan_rcv+0xa69>
5e39: e9 5a f6 ff ff jmpq 5498 <vxlan_rcv+0xc8>
5e3e: 4c 89 e7 mov %r12,%rdi
5e41: 4c 89 54 24 08 mov %r10,0x8(%rsp)
5e46: 44 89 4c 24 20 mov %r9d,0x20(%rsp)
5e4b: 4c 89 44 24 10 mov %r8,0x10(%rsp)
5e50: e8 00 00 00 00 callq 5e55 <vxlan_rcv+0xa85>
5e55: 41 0f b6 84 24 91 00 movzbl 0x91(%r12),%eax
5e5c: 00 00
*
* Atomically increments @v by 1.
*/
static __always_inline void atomic64_inc(atomic64_t *v)
{
asm volatile(LOCK_PREFIX "incq %0"
5e5e: 4c 8b 54 24 08 mov 0x8(%rsp),%r10
cell = this_cpu_ptr(gcells->cells);
if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
atomic_long_inc(&dev->rx_dropped);
kfree_skb(skb);
5e63: 49 8b bc 24 d8 00 00 mov 0xd8(%r12),%rdi
5e6a: 00
5e6b: 4c 8b 44 24 10 mov 0x10(%rsp),%r8
skb_remcsum_adjust_partial(skb, ptr, start, offset);
return;
}
if (unlikely(skb->ip_summed != CHECKSUM_COMPLETE)) {
__skb_checksum_complete(skb);
5e70: 44 8b 4c 24 20 mov 0x20(%rsp),%r9d
5e75: 4c 89 d6 mov %r10,%rsi
5e78: 89 c2 mov %eax,%edx
5e7a: 48 29 fe sub %rdi,%rsi
5e7d: 83 e2 06 and $0x6,%edx
5e80: 80 fa 04 cmp $0x4,%dl
5e83: 74 3d je 5ec2 <vxlan_rcv+0xaf2>
static __always_inline void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
unsigned int off)
{
if (skb->ip_summed == CHECKSUM_COMPLETE)
5e85: 80 fa 06 cmp $0x6,%dl
5e88: 0f 85 58 f9 ff ff jne 57e6 <vxlan_rcv+0x416>
return;
}
if (unlikely(skb->ip_summed != CHECKSUM_COMPLETE)) {
__skb_checksum_complete(skb);
skb_postpull_rcsum(skb, skb->data, ptr - (void *)skb->data);
5e8e: 41 0f b7 94 24 98 00 movzwl 0x98(%r12),%edx
5e95: 00 00
5e97: 49 2b bc 24 d0 00 00 sub 0xd0(%r12),%rdi
5e9e: 00
static __always_inline void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
unsigned int off)
{
if (skb->ip_summed == CHECKSUM_COMPLETE)
5e9f: 39 fa cmp %edi,%edx
5ea1: 0f 89 3f f9 ff ff jns 57e6 <vxlan_rcv+0x416>
return;
}
if (unlikely(skb->ip_summed != CHECKSUM_COMPLETE)) {
__skb_checksum_complete(skb);
skb_postpull_rcsum(skb, skb->data, ptr - (void *)skb->data);
5ea7: 83 e0 f9 and $0xfffffff9,%eax
5eaa: 41 88 84 24 91 00 00 mov %al,0x91(%r12)
5eb1: 00
static __always_inline void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
unsigned int off)
{
if (skb->ip_summed == CHECKSUM_COMPLETE)
5eb2: e9 2f f9 ff ff jmpq 57e6 <vxlan_rcv+0x416>
skb->csum = csum_block_sub(skb->csum,
csum_partial(start, len, 0), off);
else if (skb->ip_summed == CHECKSUM_PARTIAL &&
5eb7: 41 be 02 00 00 00 mov $0x2,%r14d
5ebd: e9 4d ff ff ff jmpq 5e0f <vxlan_rcv+0xa3f>
5ec2: 31 d2 xor %edx,%edx
5ec4: e8 00 00 00 00 callq 5ec9 <vxlan_rcv+0xaf9>
5ec9: 45 8b 9c 24 98 00 00 mov 0x98(%r12),%r11d
5ed0: 00
5ed1: f7 d0 not %eax
5ed3: 4c 8b 44 24 10 mov 0x10(%rsp),%r8
skb_checksum_start_offset(skb) < 0)
skb->ip_summed = CHECKSUM_NONE;
5ed8: 41 01 c3 add %eax,%r11d
5edb: 41 83 d3 00 adc $0x0,%r11d
5edf: 44 8b 4c 24 20 mov 0x20(%rsp),%r9d
5ee4: 45 89 9c 24 98 00 00 mov %r11d,0x98(%r12)
5eeb: 00
return 0;
case INET_ECN_ECT_0:
case INET_ECN_ECT_1:
return 1;
case INET_ECN_CE:
return 2;
5eec: 4c 8b 54 24 08 mov 0x8(%rsp),%r10
5ef1: e9 f8 f8 ff ff jmpq 57ee <vxlan_rcv+0x41e>
static __always_inline void
__skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
unsigned int off)
{
if (skb->ip_summed == CHECKSUM_COMPLETE)
skb->csum = csum_block_sub(skb->csum,
5ef6: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
5efd: 00 00 00
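The `IP_ECN_set_ce()` fragments scattered through the vxlan_rcv listing above rely on an RFC 1624-style incremental checksum update: forcing the two ECN bits of TOS to CE adds a known constant to the header sum, so the stored checksum can be patched in place. A user-space transcription of that arithmetic, verifiable against a from-scratch RFC 1071 checksum (`hdr` is a raw 20-byte IPv4 header; TOS is byte 1, the checksum bytes 10-11):

```c
#include <arpa/inet.h>   /* htons */
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Force the ECN bits of TOS to CE (binary 11) and patch the checksum
 * incrementally, mirroring the kernel's IP_ECN_set_ce(). */
static int set_ce(uint8_t *hdr)
{
	uint16_t stored;
	memcpy(&stored, hdr + 10, 2);      /* raw, network byte order */
	uint32_t check = stored;
	/* NOT-ECT => 01, ECT_1 => 10, ECT_0 => 11, CE => 00 */
	uint32_t ecn = (hdr[1] + 1) & 3;

	if (!(ecn & 2))                    /* NOT-ECT (fail) or already CE */
		return !ecn;

	/* ECT_1: check += htons(0xFFFD);  ECT_0: check += htons(0xFFFE) */
	check += (uint16_t)htons(0xFFFB) + (uint16_t)htons((uint16_t)ecn);

	stored = (uint16_t)(check + (check >= 0xFFFF));  /* end-around carry */
	memcpy(hdr + 10, &stored, 2);
	hdr[1] |= 3;                       /* INET_ECN_CE */
	return 1;
}

/* Reference RFC 1071 checksum, for verifying the incremental update. */
static uint16_t ip_csum(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	for (size_t i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)((buf[i] << 8) | buf[i + 1]);
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return htons((uint16_t)~sum);
}
```

Because the update operates on the checksum bytes as stored, the same code is correct on both byte orders; the `check >= 0xFFFF` term supplies the one's-complement end-around carry.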
0000000000005f00 <vxlan_xmit>:
5f00: e8 00 00 00 00 callq 5f05 <vxlan_xmit+0x5>
5f05: 55 push %rbp
5f06: 48 89 e5 mov %rsp,%rbp
5f09: 41 57 push %r15
5f0b: 41 56 push %r14
5f0d: 41 55 push %r13
5f0f: 41 54 push %r12
5f11: 49 89 f6 mov %rsi,%r14
5f14: 53 push %rbx
5f15: 48 89 fb mov %rdi,%rbx
5f18: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
5f1c: 48 83 ec 60 sub $0x60,%rsp
5f20: 65 48 8b 04 25 28 00 mov %gs:0x28,%rax
5f27: 00 00
5f29: 48 89 44 24 58 mov %rax,0x58(%rsp)
5f2e: 31 c0 xor %eax,%eax
* Outer IP header inherits ECN and DF from inner header.
* Outer UDP destination is the VXLAN assigned port.
* source port is based on hash of flow
*/
static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
{
5f30: 48 8b 47 58 mov 0x58(%rdi),%rax
5f34: 48 83 e0 fe and $0xfffffffffffffffe,%rax
5f38: 0f 84 fc 04 00 00 je 643a <vxlan_xmit+0x53a>
5f3e: f6 40 61 02 testb $0x2,0x61(%rax)
5f42: 0f 84 52 01 00 00 je 609a <vxlan_xmit+0x19a>
5f48: 48 05 a0 00 00 00 add $0xa0,%rax
5f4e: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
5f55: 48 8b 93 d8 00 00 00 mov 0xd8(%rbx),%rdx
5f5c: 48 29 ca sub %rcx,%rdx
5f5f: 66 89 93 c6 00 00 00 mov %dx,0xc6(%rbx)
static inline struct metadata_dst *skb_metadata_dst(struct sk_buff *skb)
{
struct metadata_dst *md_dst = (struct metadata_dst *) skb_dst(skb);
if (md_dst && md_dst->dst.flags & DST_METADATA)
5f66: 41 8b b6 d8 08 00 00 mov 0x8d8(%r14),%esi
5f6d: f7 c6 00 20 00 00 test $0x2000,%esi
5f73: 0f 85 e6 00 00 00 jne 605f <vxlan_xmit+0x15f>
{
struct metadata_dst *md_dst = skb_metadata_dst(skb);
struct dst_entry *dst;
if (md_dst)
return &md_dst->u.tun_info;
5f79: 83 e6 02 and $0x2,%esi
5f7c: 4d 8d ae 40 08 00 00 lea 0x840(%r14),%r13
return skb->mac_header != (typeof(skb->mac_header))~0U;
}
static inline void skb_reset_mac_header(struct sk_buff *skb)
{
skb->mac_header = skb->data - skb->head;
5f83: 74 20 je 5fa5 <vxlan_xmit+0xa5>
5f85: 0f b7 d2 movzwl %dx,%edx
5f88: 0f b7 44 11 0c movzwl 0xc(%rcx,%rdx,1),%eax
5f8d: 66 c1 c0 08 rol $0x8,%ax
5f91: 66 3d 06 08 cmp $0x806,%ax
5f95: 0f 84 cf 04 00 00 je 646a <vxlan_xmit+0x56a>
info = skb_tunnel_info(skb);
skb_reset_mac_header(skb);
if (vxlan->flags & VXLAN_F_COLLECT_METADATA) {
5f9b: 66 3d dd 86 cmp $0x86dd,%ax
5f9f: 0f 84 77 02 00 00 je 621c <vxlan_xmit+0x31c>
5fa5: 44 0f b7 bb c6 00 00 movzwl 0xc6(%rbx),%r15d
5fac: 00
*
* Get network device private data
*/
static inline void *netdev_priv(const struct net_device *dev)
{
return (char *)dev + ALIGN(sizeof(struct net_device), NETDEV_ALIGN);
5fad: 4c 89 ef mov %r13,%rdi
5fb0: 49 01 cf add %rcx,%r15
else
kfree_skb(skb);
return NETDEV_TX_OK;
}
if (vxlan->flags & VXLAN_F_PROXY) {
5fb3: 4c 89 fe mov %r15,%rsi
eth = eth_hdr(skb);
if (ntohs(eth->h_proto) == ETH_P_ARP)
5fb6: e8 45 a0 ff ff callq 0 <__vxlan_find_mac>
5fbb: 48 85 c0 test %rax,%rax
5fbe: 49 89 c4 mov %rax,%r12
5fc1: 0f 84 24 02 00 00 je 61eb <vxlan_xmit+0x2eb>
5fc7: 41 80 7c 24 48 00 cmpb $0x0,0x48(%r12)
return arp_reduce(dev, skb);
#if IS_ENABLED(CONFIG_IPV6)
else if (ntohs(eth->h_proto) == ETH_P_IPV6 &&
5fcd: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 5fd4 <vxlan_xmit+0xd4>
5fd4: c6 44 24 18 00 movb $0x0,0x18(%rsp)
skb->network_header += offset;
}
static inline unsigned char *skb_mac_header(const struct sk_buff *skb)
{
return skb->head + skb->mac_header;
5fd9: 49 89 44 24 28 mov %rax,0x28(%r12)
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
5fde: 0f 88 e0 00 00 00 js 60c4 <vxlan_xmit+0x1c4>
5fe4: 49 8b 44 24 30 mov 0x30(%r12),%rax
5fe9: 49 83 c4 30 add $0x30,%r12
if (f)
5fed: 49 39 c4 cmp %rax,%r12
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
5ff0: 4c 8d 78 d8 lea -0x28(%rax),%r15
if (f)
5ff4: 74 6f je 6065 <vxlan_xmit+0x165>
5ff6: 0f b6 44 24 18 movzbl 0x18(%rsp),%eax
eth = eth_hdr(skb);
f = vxlan_find_mac(vxlan, eth->h_dest);
did_rsc = false;
if (f && (f->flags & NTF_ROUTER) && (vxlan->flags & VXLAN_F_RSC) &&
5ffb: 4d 89 fd mov %r15,%r13
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
if (f)
f->used = jiffies;
5ffe: 89 44 24 20 mov %eax,0x20(%rsp)
6002: 49 8b 47 28 mov 0x28(%r15),%rax
#endif
}
eth = eth_hdr(skb);
f = vxlan_find_mac(vxlan, eth->h_dest);
did_rsc = false;
6006: 49 39 c4 cmp %rax,%r12
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
if (f)
f->used = jiffies;
6009: 4c 8d 78 d8 lea -0x28(%rax),%r15
600d: 74 36 je 6045 <vxlan_xmit+0x145>
eth = eth_hdr(skb);
f = vxlan_find_mac(vxlan, eth->h_dest);
did_rsc = false;
if (f && (f->flags & NTF_ROUTER) && (vxlan->flags & VXLAN_F_RSC) &&
600f: 4d 85 ed test %r13,%r13
6012: 74 7e je 6092 <vxlan_xmit+0x192>
6014: be 20 00 08 02 mov $0x2080020,%esi
kfree_skb(skb);
return NETDEV_TX_OK;
}
}
list_for_each_entry_rcu(rdst, &f->remotes, list) {
6019: 48 89 df mov %rbx,%rdi
601c: e8 00 00 00 00 callq 6021 <vxlan_xmit+0x121>
6021: 48 85 c0 test %rax,%rax
6024: 74 dc je 6002 <vxlan_xmit+0x102>
fdst = rdst;
continue;
}
skb1 = skb_clone(skb, GFP_ATOMIC);
if (skb1)
vxlan_xmit_one(skb1, dev, rdst, did_rsc);
6026: 8b 4c 24 20 mov 0x20(%rsp),%ecx
602a: 4c 89 fa mov %r15,%rdx
kfree_skb(skb);
return NETDEV_TX_OK;
}
}
list_for_each_entry_rcu(rdst, &f->remotes, list) {
602d: 48 89 c7 mov %rax,%rdi
fdst = rdst;
continue;
}
skb1 = skb_clone(skb, GFP_ATOMIC);
if (skb1)
vxlan_xmit_one(skb1, dev, rdst, did_rsc);
6030: 4c 89 f6 mov %r14,%rsi
6033: e8 e8 e1 ff ff callq 4220 <vxlan_xmit_one>
kfree_skb(skb);
return NETDEV_TX_OK;
}
}
list_for_each_entry_rcu(rdst, &f->remotes, list) {
6038: 49 8b 47 28 mov 0x28(%r15),%rax
603c: 49 39 c4 cmp %rax,%r12
struct sk_buff *skb1;
if (!fdst) {
603f: 4c 8d 78 d8 lea -0x28(%rax),%r15
6043: 75 ca jne 600f <vxlan_xmit+0x10f>
fdst = rdst;
continue;
}
skb1 = skb_clone(skb, GFP_ATOMIC);
6045: 4d 85 ed test %r13,%r13
6048: 74 1b je 6065 <vxlan_xmit+0x165>
604a: 0f b6 4c 24 18 movzbl 0x18(%rsp),%ecx
604f: 4c 89 ea mov %r13,%rdx
if (skb1)
6052: 4c 89 f6 mov %r14,%rsi
6055: 48 89 df mov %rbx,%rdi
vxlan_xmit_one(skb1, dev, rdst, did_rsc);
6058: e8 c3 e1 ff ff callq 4220 <vxlan_xmit_one>
605d: eb 0e jmp 606d <vxlan_xmit+0x16d>
605f: f6 40 49 01 testb $0x1,0x49(%rax)
6063: 75 4e jne 60b3 <vxlan_xmit+0x1b3>
6065: 48 89 df mov %rbx,%rdi
6068: e8 00 00 00 00 callq 606d <vxlan_xmit+0x16d>
kfree_skb(skb);
return NETDEV_TX_OK;
}
}
list_for_each_entry_rcu(rdst, &f->remotes, list) {
606d: 31 c0 xor %eax,%eax
606f: 48 8b 4c 24 58 mov 0x58(%rsp),%rcx
6074: 65 48 33 0c 25 28 00 xor %gs:0x28,%rcx
607b: 00 00
if (skb1)
vxlan_xmit_one(skb1, dev, rdst, did_rsc);
}
if (fdst)
vxlan_xmit_one(skb, dev, fdst, did_rsc);
607d: 0f 85 52 08 00 00 jne 68d5 <vxlan_xmit+0x9d5>
6083: 48 8d 65 d8 lea -0x28(%rbp),%rsp
6087: 5b pop %rbx
6088: 41 5c pop %r12
608a: 41 5d pop %r13
608c: 41 5e pop %r14
608e: 41 5f pop %r15
info = skb_tunnel_info(skb);
skb_reset_mac_header(skb);
if (vxlan->flags & VXLAN_F_COLLECT_METADATA) {
if (info && info->mode & IP_TUNNEL_INFO_TX)
6090: 5d pop %rbp
6091: c3 retq
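The `list_for_each_entry_rcu(rdst, &f->remotes, list)` fragments above show vxlan_xmit()'s flooding pattern: defer the first remote (`fdst`), transmit a clone per additional remote, and finally send the original skb to the deferred one, so exactly n-1 clones are allocated. A minimal sketch of that shape (`buf`, `clone_buf`, `xmit_one` are hypothetical stand-ins for `sk_buff`, `skb_clone()` and `vxlan_xmit_one()`):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct buf { char data[16]; };

static int clone_count, send_count;   /* instrumentation for the sketch */

static struct buf *clone_buf(const struct buf *b)
{
	struct buf *c = malloc(sizeof(*c));
	if (c) {
		memcpy(c, b, sizeof(*c));
		clone_count++;
	}
	return c;
}

static void xmit_one(struct buf *b)
{
	send_count++;
	free(b);
}

static void flood(struct buf *b, int nremotes)
{
	int fdst = -1;                    /* like fdst in vxlan_xmit() */

	for (int i = 0; i < nremotes; i++) {
		if (fdst < 0) {               /* defer the first remote */
			fdst = i;
			continue;
		}
		struct buf *c = clone_buf(b); /* clone for every other remote */
		if (c)
			xmit_one(c);
	}
	if (fdst >= 0)
		xmit_one(b);                  /* original goes to deferred remote */
	else
		free(b);                      /* no remotes: drop */
}
```

Deferring one destination rather than cloning for all of them saves one allocation per packet on the common single-remote path.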
6092: 4d 89 fd mov %r15,%r13
if ((vxlan->flags & VXLAN_F_L2MISS) &&
!is_multicast_ether_addr(eth->h_dest))
vxlan_fdb_miss(vxlan, eth->h_dest);
dev->stats.tx_dropped++;
kfree_skb(skb);
6095: e9 68 ff ff ff jmpq 6002 <vxlan_xmit+0x102>
609a: 48 8b 80 90 00 00 00 mov 0x90(%rax),%rax
if (fdst)
vxlan_xmit_one(skb, dev, fdst, did_rsc);
else
kfree_skb(skb);
return NETDEV_TX_OK;
}
60a1: 48 85 c0 test %rax,%rax
60a4: 0f 84 90 03 00 00 je 643a <vxlan_xmit+0x53a>
60aa: 48 83 c0 1c add $0x1c,%rax
60ae: e9 9b fe ff ff jmpq 5f4e <vxlan_xmit+0x4e>
60b3: 31 c9 xor %ecx,%ecx
60b5: 31 d2 xor %edx,%edx
60b7: 4c 89 f6 mov %r14,%rsi
60ba: 48 89 df mov %rbx,%rdi
60bd: e8 5e e1 ff ff callq 4220 <vxlan_xmit_one>
kfree_skb(skb);
return NETDEV_TX_OK;
}
}
list_for_each_entry_rcu(rdst, &f->remotes, list) {
60c2: eb a9 jmp 606d <vxlan_xmit+0x16d>
60c4: 41 f6 86 d8 08 00 00 testb $0x4,0x8d8(%r14)
60cb: 04
dst = skb_dst(skb);
if (dst && dst->lwtstate)
60cc: 0f 84 12 ff ff ff je 5fe4 <vxlan_xmit+0xe4>
60d2: 41 0f b7 47 0c movzwl 0xc(%r15),%eax
60d7: 66 c1 c0 08 rol $0x8,%ax
info->options_len = len;
}
static inline struct ip_tunnel_info *lwt_tun_info(struct lwtunnel_state *lwtstate)
{
return (struct ip_tunnel_info *)lwtstate->data;
60db: 66 3d 00 08 cmp $0x800,%ax
60df: 74 0a je 60eb <vxlan_xmit+0x1eb>
60e1: 66 3d dd 86 cmp $0x86dd,%ax
skb_reset_mac_header(skb);
if (vxlan->flags & VXLAN_F_COLLECT_METADATA) {
if (info && info->mode & IP_TUNNEL_INFO_TX)
vxlan_xmit_one(skb, dev, NULL, false);
60e5: 0f 85 f9 fe ff ff jne 5fe4 <vxlan_xmit+0xe4>
60eb: 0f b7 83 c6 00 00 00 movzwl 0xc6(%rbx),%eax
60f2: 48 8b 93 d0 00 00 00 mov 0xd0(%rbx),%rdx
eth = eth_hdr(skb);
f = vxlan_find_mac(vxlan, eth->h_dest);
did_rsc = false;
if (f && (f->flags & NTF_ROUTER) && (vxlan->flags & VXLAN_F_RSC) &&
60f9: 48 01 d0 add %rdx,%rax
60fc: f6 00 01 testb $0x1,(%rax)
60ff: 0f 85 f2 05 00 00 jne 66f7 <vxlan_xmit+0x7f7>
(ntohs(eth->h_proto) == ETH_P_IP ||
6105: 0f b7 40 0c movzwl 0xc(%rax),%eax
6109: 66 c1 c0 08 rol $0x8,%ax
eth = eth_hdr(skb);
f = vxlan_find_mac(vxlan, eth->h_dest);
did_rsc = false;
if (f && (f->flags & NTF_ROUTER) && (vxlan->flags & VXLAN_F_RSC) &&
610d: 66 3d 00 08 cmp $0x800,%ax
6111: 0f 84 66 05 00 00 je 667d <vxlan_xmit+0x77d>
6117: 66 3d dd 86 cmp $0x86dd,%ax
611b: 0f 85 d6 05 00 00 jne 66f7 <vxlan_xmit+0x7f7>
6121: 8b 8b 80 00 00 00 mov 0x80(%rbx),%ecx
6127: 89 c8 mov %ecx,%eax
6129: 2b 83 84 00 00 00 sub 0x84(%rbx),%eax
static bool route_shortcircuit(struct net_device *dev, struct sk_buff *skb)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct neighbour *n;
if (is_multicast_ether_addr(eth_hdr(skb)->h_dest))
612f: 83 f8 27 cmp $0x27,%eax
6132: 0f 86 39 07 00 00 jbe 6871 <vxlan_xmit+0x971>
return false;
n = NULL;
switch (ntohs(eth_hdr(skb)->h_proto)) {
6138: 44 0f b7 83 c4 00 00 movzwl 0xc4(%rbx),%r8d
613f: 00
6140: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 6147 <vxlan_xmit+0x247>
6147: 48 8b 78 28 mov 0x28(%rax),%rdi
614b: 49 01 d0 add %rdx,%r8
614e: 4c 89 f2 mov %r14,%rdx
6151: 49 8d 70 18 lea 0x18(%r8),%rsi
6155: 4c 89 44 24 20 mov %r8,0x20(%rsp)
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
615a: e8 00 00 00 00 callq 615f <vxlan_xmit+0x25f>
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
615f: 48 85 c0 test %rax,%rax
6162: 4c 8b 44 24 20 mov 0x20(%rsp),%r8
6167: 0f 84 8a 0a 00 00 je 6bf7 <vxlan_xmit+0xcf7>
skb->transport_header += offset;
}
static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
return skb->head + skb->network_header;
616d: 0f b7 bb c6 00 00 00 movzwl 0xc6(%rbx),%edi
struct ipv6hdr *pip6;
if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
return false;
pip6 = ipv6_hdr(skb);
n = neigh_lookup(ipv6_stub->nd_tbl, &pip6->daddr, dev);
6174: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
617b: 44 8b 80 b8 00 00 00 mov 0xb8(%rax),%r8d
6182: 48 8d 34 39 lea (%rcx,%rdi,1),%rsi
6186: 0f b7 56 04 movzwl 0x4(%rsi),%edx
618a: 44 8b 0e mov (%rsi),%r9d
618d: 66 33 90 bc 00 00 00 xor 0xbc(%rax),%dx
if (!n && (vxlan->flags & VXLAN_F_L3MISS)) {
6194: 45 31 c8 xor %r9d,%r8d
6197: 0f b7 d2 movzwl %dx,%edx
619a: 41 09 d0 or %edx,%r8d
skb->network_header += offset;
}
static inline unsigned char *skb_mac_header(const struct sk_buff *skb)
{
return skb->head + skb->mac_header;
619d: 44 89 44 24 20 mov %r8d,0x20(%rsp)
61a2: 0f 85 e1 05 00 00 jne 6789 <vxlan_xmit+0x889>
61a8: f0 ff 48 30 lock decl 0x30(%rax)
* Please note: addr1 & addr2 must both be aligned to u16.
*/
static inline bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
61ac: 0f 84 ca 05 00 00 je 677c <vxlan_xmit+0x87c>
61b2: 8b 44 24 20 mov 0x20(%rsp),%eax
61b6: 85 c0 test %eax,%eax
61b8: 0f 84 39 05 00 00 je 66f7 <vxlan_xmit+0x7f7>
61be: 4c 89 fe mov %r15,%rsi
61c1: 4c 89 ef mov %r13,%rdi
61c4: e8 37 9e ff ff callq 0 <__vxlan_find_mac>
61c9: 48 85 c0 test %rax,%rax
if (n) {
bool diff;
diff = !ether_addr_equal(eth_hdr(skb)->h_dest, n->ha);
if (diff) {
61cc: 49 89 c4 mov %rax,%r12
61cf: 0f 84 f6 06 00 00 je 68cb <vxlan_xmit+0x9cb>
61d5: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 61dc <vxlan_xmit+0x2dc>
* returns true if the result is 0, or false for all other
* cases.
*/
static __always_inline bool atomic_dec_and_test(atomic_t *v)
{
GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
61dc: c6 44 24 18 01 movb $0x1,0x18(%rsp)
61e1: 49 89 44 24 28 mov %rax,0x28(%r12)
if (f && (f->flags & NTF_ROUTER) && (vxlan->flags & VXLAN_F_RSC) &&
(ntohs(eth->h_proto) == ETH_P_IP ||
ntohs(eth->h_proto) == ETH_P_IPV6)) {
did_rsc = route_shortcircuit(dev, skb);
if (did_rsc)
61e6: e9 f9 fd ff ff jmpq 5fe4 <vxlan_xmit+0xe4>
61eb: c6 44 24 18 00 movb $0x0,0x18(%rsp)
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
61f0: 48 c7 c6 00 00 00 00 mov $0x0,%rsi
61f7: 4c 89 ef mov %r13,%rdi
if (f)
61fa: e8 01 9e ff ff callq 0 <__vxlan_find_mac>
61ff: 48 85 c0 test %rax,%rax
6202: 49 89 c4 mov %rax,%r12
f->used = jiffies;
6205: 0f 84 f6 04 00 00 je 6701 <vxlan_xmit+0x801>
620b: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 6212 <vxlan_xmit+0x312>
6212: 49 89 44 24 28 mov %rax,0x28(%r12)
6217: e9 c8 fd ff ff jmpq 5fe4 <vxlan_xmit+0xe4>
#endif
}
eth = eth_hdr(skb);
f = vxlan_find_mac(vxlan, eth->h_dest);
did_rsc = false;
621c: 8b 93 80 00 00 00 mov 0x80(%rbx),%edx
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
6222: 89 d0 mov %edx,%eax
6224: 2b 83 84 00 00 00 sub 0x84(%rbx),%eax
622a: 83 f8 3f cmp $0x3f,%eax
622d: 0f 86 f6 04 00 00 jbe 6729 <vxlan_xmit+0x829>
6233: 0f b7 83 c4 00 00 00 movzwl 0xc4(%rbx),%eax
if (f)
623a: 48 01 c8 add %rcx,%rax
f->used = jiffies;
623d: 80 78 06 3a cmpb $0x3a,0x6(%rax)
6241: 0f 85 5e fd ff ff jne 5fa5 <vxlan_xmit+0xa5>
6247: 44 0f b7 a3 c2 00 00 movzwl 0xc2(%rbx),%r12d
624e: 00
624f: 49 01 cc add %rcx,%r12
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
6252: 66 41 81 3c 24 87 00 cmpw $0x87,(%r12)
6259: 0f 85 46 fd ff ff jne 5fa5 <vxlan_xmit+0xa5>
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
625f: 49 8b 96 08 03 00 00 mov 0x308(%r14),%rdx
skb->transport_header += offset;
}
static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
return skb->head + skb->network_header;
6266: 48 85 d2 test %rdx,%rdx
6269: 0f 84 f8 03 00 00 je 6667 <vxlan_xmit+0x767>
if (ntohs(eth->h_proto) == ETH_P_ARP)
return arp_reduce(dev, skb);
#if IS_ENABLED(CONFIG_IPV6)
else if (ntohs(eth->h_proto) == ETH_P_IPV6 &&
pskb_may_pull(skb, sizeof(struct ipv6hdr)
+ sizeof(struct nd_msg)) &&
626f: 48 ba 00 00 00 00 00 movabs $0x100000000000000,%rdx
6276: 00 00 01
return skb->transport_header != (typeof(skb->transport_header))~0U;
}
static inline unsigned char *skb_transport_header(const struct sk_buff *skb)
{
return skb->head + skb->transport_header;
6279: 48 33 50 20 xor 0x20(%rax),%rdx
627d: 48 0b 50 18 or 0x18(%rax),%rdx
6281: 0f 84 e0 03 00 00 je 6667 <vxlan_xmit+0x767>
ipv6_hdr(skb)->nexthdr == IPPROTO_ICMPV6) {
struct nd_msg *msg;
msg = (struct nd_msg *)skb_transport_header(skb);
if (msg->icmph.icmp6_code == 0 &&
6287: 41 0f b6 44 24 08 movzbl 0x8(%r12),%eax
628d: 3d ff 00 00 00 cmp $0xff,%eax
6292: 0f 84 cf 03 00 00 je 6667 <vxlan_xmit+0x767>
const struct in6_addr *saddr, *daddr;
struct neighbour *n;
struct inet6_dev *in6_dev;
in6_dev = __in6_dev_get(dev);
if (!in6_dev)
6298: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 629f <vxlan_xmit+0x39f>
msg = (struct nd_msg *)skb_transport_header(skb);
if (msg->icmph.icmp6_code != 0 ||
msg->icmph.icmp6_type != NDISC_NEIGHBOUR_SOLICITATION)
goto out;
if (ipv6_addr_loopback(daddr) ||
629f: 49 8d 74 24 08 lea 0x8(%r12),%rsi
62a4: 4c 89 f2 mov %r14,%rdx
62a7: 48 8b 78 28 mov 0x28(%rax),%rdi
62ab: e8 00 00 00 00 callq 62b0 <vxlan_xmit+0x3b0>
62b0: 48 85 c0 test %rax,%rax
62b3: 49 89 c7 mov %rax,%r15
62b6: 0f 84 3b 06 00 00 je 68f7 <vxlan_xmit+0x9f7>
62bc: f6 80 ad 00 00 00 c2 testb $0xc2,0xad(%rax)
62c3: 0f 84 21 06 00 00 je 68ea <vxlan_xmit+0x9ea>
ipv6_addr_is_multicast(&msg->target))
goto out;
n = neigh_lookup(ipv6_stub->nd_tbl, &msg->target, dev);
62c9: 48 8d b0 b8 00 00 00 lea 0xb8(%rax),%rsi
62d0: 4c 89 ef mov %r13,%rdi
62d3: e8 28 9d ff ff callq 0 <__vxlan_find_mac>
62d8: 48 85 c0 test %rax,%rax
62db: 0f 84 e7 08 00 00 je 6bc8 <vxlan_xmit+0xcc8>
if (n) {
62e1: 48 8b 15 00 00 00 00 mov 0x0(%rip),%rdx # 62e8 <vxlan_xmit+0x3e8>
62e8: 48 89 50 28 mov %rdx,0x28(%rax)
struct vxlan_fdb *f;
struct sk_buff *reply;
if (!(n->nud_state & NUD_CONNECTED)) {
62ec: 48 8b 50 30 mov 0x30(%rax),%rdx
62f0: 66 83 7a d8 0a cmpw $0xa,-0x28(%rdx)
62f5: 0f 84 bd 08 00 00 je 6bb8 <vxlan_xmit+0xcb8>
neigh_release(n);
goto out;
}
f = vxlan_find_mac(vxlan, n->ha);
62fb: 83 7a dc 00 cmpl $0x0,-0x24(%rdx)
62ff: 0f 94 c2 sete %dl
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
6302: 84 d2 test %dl,%dl
6304: 0f 85 e0 05 00 00 jne 68ea <vxlan_xmit+0x9ea>
if (f)
630a: 0f be 40 48 movsbl 0x48(%rax),%eax
630e: c1 e8 1f shr $0x1f,%eax
f->used = jiffies;
6311: 4c 8b 6b 20 mov 0x20(%rbx),%r13
6315: 83 e0 01 and $0x1,%eax
6318: 88 44 24 20 mov %al,0x20(%rsp)
631c: 4d 85 ed test %r13,%r13
631f: 0f 84 c5 05 00 00 je 68ea <vxlan_xmit+0x9ea>
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
6325: 41 0f b7 95 4e 02 00 movzwl 0x24e(%r13),%edx
632c: 00
return ipv6_addr_any(&ipa->sin6.sin6_addr);
else
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
632d: 41 0f b7 85 50 02 00 movzwl 0x250(%r13),%eax
6334: 00
neigh_release(n);
goto out;
}
f = vxlan_find_mac(vxlan, n->ha);
if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
6335: 83 c9 ff or $0xffffffff,%ecx
6338: be 20 00 08 02 mov $0x2080020,%esi
neigh_release(n);
goto out;
}
reply = vxlan_na_create(skb, n,
!!(f ? f->flags & NTF_ROUTER : 0));
633d: 01 d0 add %edx,%eax
633f: 41 0f b7 95 52 02 00 movzwl 0x252(%r13),%edx
6346: 00
#if IS_ENABLED(CONFIG_IPV6)
static struct sk_buff *vxlan_na_create(struct sk_buff *request,
struct neighbour *n, bool isrouter)
{
struct net_device *dev = request->dev;
6347: 83 e0 f0 and $0xfffffff0,%eax
634a: 8d 7c 10 58 lea 0x58(%rax,%rdx,1),%edi
u8 *daddr;
int na_olen = 8; /* opt hdr + ETH_ALEN for target */
int ns_olen;
int i, len;
if (dev == NULL)
634e: 31 d2 xor %edx,%edx
6350: e8 00 00 00 00 callq 6355 <vxlan_xmit+0x455>
struct sk_buff *__build_skb(void *data, unsigned int frag_size);
struct sk_buff *build_skb(void *data, unsigned int frag_size);
static inline struct sk_buff *alloc_skb(unsigned int size,
gfp_t priority)
{
return __alloc_skb(size, priority, 0, NUMA_NO_NODE);
6355: 48 85 c0 test %rax,%rax
6358: 49 89 c4 mov %rax,%r12
635b: 0f 84 89 05 00 00 je 68ea <vxlan_xmit+0x9ea>
6361: 66 c7 80 c0 00 00 00 movw $0xdd86,0xc0(%rax)
6368: 86 dd
636a: 4c 89 68 20 mov %r13,0x20(%rax)
636e: be 0e 00 00 00 mov $0xe,%esi
6373: 48 8b 53 20 mov 0x20(%rbx),%rdx
6377: 4c 89 e7 mov %r12,%rdi
637a: 0f b7 8a 4e 02 00 00 movzwl 0x24e(%rdx),%ecx
6381: 0f b7 82 50 02 00 00 movzwl 0x250(%rdx),%eax
6388: 01 c8 add %ecx,%eax
638a: 83 e0 f0 and $0xfffffff0,%eax
return NULL;
len = LL_RESERVED_SPACE(dev) + sizeof(struct ipv6hdr) +
sizeof(*na) + na_olen + dev->needed_tailroom;
reply = alloc_skb(len, GFP_ATOMIC);
if (reply == NULL)
638d: 83 c0 10 add $0x10,%eax
6390: 41 01 84 24 c8 00 00 add %eax,0xc8(%r12)
6397: 00
return NULL;
reply->protocol = htons(ETH_P_IPV6);
6398: 48 63 d0 movslq %eax,%rdx
reply->dev = dev;
639b: 49 01 94 24 d8 00 00 add %rdx,0xd8(%r12)
63a2: 00
skb_reserve(reply, LL_RESERVED_SPACE(request->dev));
63a3: e8 00 00 00 00 callq 63a8 <vxlan_xmit+0x4a8>
skb_push(reply, sizeof(struct ethhdr));
63a8: 49 8b 84 24 d0 00 00 mov 0xd0(%r12),%rax
63af: 00
if (reply == NULL)
return NULL;
reply->protocol = htons(ETH_P_IPV6);
reply->dev = dev;
skb_reserve(reply, LL_RESERVED_SPACE(request->dev));
63b0: 49 8b 8c 24 d8 00 00 mov 0xd8(%r12),%rcx
63b7: 00
63b8: 48 29 c1 sub %rax,%rcx
63bb: 66 41 89 8c 24 c6 00 mov %cx,0xc6(%r12)
63c2: 00 00
* room. This is only allowed for an empty buffer.
*/
static inline void skb_reserve(struct sk_buff *skb, int len)
{
skb->data += len;
skb->tail += len;
63c4: 0f b7 93 c2 00 00 00 movzwl 0xc2(%rbx),%edx
* Increase the headroom of an empty &sk_buff by reducing the tail
* room. This is only allowed for an empty buffer.
*/
static inline void skb_reserve(struct sk_buff *skb, int len)
{
skb->data += len;
63cb: 48 8b b3 d0 00 00 00 mov 0xd0(%rbx),%rsi
63d2: 0f b7 bb c6 00 00 00 movzwl 0xc6(%rbx),%edi
return skb->mac_header != (typeof(skb->mac_header))~0U;
}
static inline void skb_reset_mac_header(struct sk_buff *skb)
{
skb->mac_header = skb->data - skb->head;
63d9: 44 8b 9b 80 00 00 00 mov 0x80(%rbx),%r11d
63e0: 48 01 f2 add %rsi,%rdx
63e3: 48 8d 7c 3e 06 lea 0x6(%rsi,%rdi,1),%rdi
63e8: 48 89 d6 mov %rdx,%rsi
63eb: 48 2b b3 d8 00 00 00 sub 0xd8(%rbx),%rsi
63f2: 41 29 f3 sub %esi,%r11d
return skb->transport_header != (typeof(skb->transport_header))~0U;
}
static inline unsigned char *skb_transport_header(const struct sk_buff *skb)
{
return skb->head + skb->transport_header;
63f5: 45 8d 53 e7 lea -0x19(%r11),%r10d
63f9: 45 85 d2 test %r10d,%r10d
63fc: 0f 8e 46 05 00 00 jle 6948 <vxlan_xmit+0xa48>
skb_push(reply, sizeof(struct ethhdr));
skb_reset_mac_header(reply);
ns = (struct nd_msg *)skb_transport_header(request);
daddr = eth_hdr(request)->h_source;
6402: 80 7a 18 01 cmpb $0x1,0x18(%rdx)
6406: 0f 84 34 05 00 00 je 6940 <vxlan_xmit+0xa40>
ns_olen = request->len - skb_transport_offset(request) - sizeof(*ns);
for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) {
640c: 31 f6 xor %esi,%esi
640e: eb 0f jmp 641f <vxlan_xmit+0x51f>
6410: 4c 63 c6 movslq %esi,%r8
skb_push(reply, sizeof(struct ethhdr));
skb_reset_mac_header(reply);
ns = (struct nd_msg *)skb_transport_header(request);
daddr = eth_hdr(request)->h_source;
6413: 42 80 7c 02 18 01 cmpb $0x1,0x18(%rdx,%r8,1)
ns_olen = request->len - skb_transport_offset(request) - sizeof(*ns);
for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) {
6419: 0f 84 24 05 00 00 je 6943 <vxlan_xmit+0xa43>
641f: 44 8d 46 01 lea 0x1(%rsi),%r8d
6423: 4d 63 c0 movslq %r8d,%r8
6426: 46 0f b6 44 02 18 movzbl 0x18(%rdx,%r8,1),%r8d
642c: 42 8d 34 c6 lea (%rsi,%r8,8),%esi
6430: 44 39 d6 cmp %r10d,%esi
if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) {
6433: 7c db jl 6410 <vxlan_xmit+0x510>
6435: e9 0e 05 00 00 jmpq 6948 <vxlan_xmit+0xa48>
643a: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
6441: 48 8b 93 d8 00 00 00 mov 0xd8(%rbx),%rdx
6448: 48 29 ca sub %rcx,%rdx
644b: 66 89 93 c6 00 00 00 mov %dx,0xc6(%rbx)
ns = (struct nd_msg *)skb_transport_header(request);
daddr = eth_hdr(request)->h_source;
ns_olen = request->len - skb_transport_offset(request) - sizeof(*ns);
for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) {
6452: 41 8b b6 d8 08 00 00 mov 0x8d8(%r14),%esi
6459: f7 c6 00 20 00 00 test $0x2000,%esi
645f: 0f 85 00 fc ff ff jne 6065 <vxlan_xmit+0x165>
6465: e9 0f fb ff ff jmpq 5f79 <vxlan_xmit+0x79>
return skb->mac_header != (typeof(skb->mac_header))~0U;
}
static inline void skb_reset_mac_header(struct sk_buff *skb)
{
skb->mac_header = skb->data - skb->head;
646a: 41 f6 86 38 02 00 00 testb $0x80,0x238(%r14)
6471: 80
6472: 0f 85 ef 01 00 00 jne 6667 <vxlan_xmit+0x767>
6478: 66 41 83 be 4c 02 00 cmpw $0x18,0x24c(%r14)
647f: 00 18
6481: 41 0f b6 86 75 02 00 movzbl 0x275(%r14),%eax
6488: 00
info = skb_tunnel_info(skb);
skb_reset_mac_header(skb);
if (vxlan->flags & VXLAN_F_COLLECT_METADATA) {
6489: 0f 85 e5 01 00 00 jne 6674 <vxlan_xmit+0x774>
648f: 83 c0 10 add $0x10,%eax
6492: 8b b3 80 00 00 00 mov 0x80(%rbx),%esi
6498: 89 f2 mov %esi,%edx
struct arphdr *parp;
u8 *arpptr, *sha;
__be32 sip, tip;
struct neighbour *n;
if (dev->flags & IFF_NOARP)
649a: 2b 93 84 00 00 00 sub 0x84(%rbx),%edx
64a0: 39 d0 cmp %edx,%eax
64a2: 0f 87 ae 02 00 00 ja 6756 <vxlan_xmit+0x856>
return (struct arphdr *)skb_network_header(skb);
}
static inline int arp_hdr_len(struct net_device *dev)
{
switch (dev->type) {
64a8: 0f b7 83 c4 00 00 00 movzwl 0xc4(%rbx),%eax
64af: 48 01 c1 add %rax,%rcx
64b2: 0f b7 01 movzwl (%rcx),%eax
64b5: 66 3d 00 01 cmp $0x100,%ax
64b9: 74 0a je 64c5 <vxlan_xmit+0x5c5>
64bb: 66 3d 00 06 cmp $0x600,%ax
#if IS_ENABLED(CONFIG_FIREWIRE_NET)
case ARPHRD_IEEE1394:
/* ARP header, device address and 2 IP addresses */
return sizeof(struct arphdr) + dev->addr_len + sizeof(u32) * 2;
64bf: 0f 85 a2 01 00 00 jne 6667 <vxlan_xmit+0x767>
64c5: 66 83 79 02 08 cmpw $0x8,0x2(%rcx)
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
64ca: 0f 85 97 01 00 00 jne 6667 <vxlan_xmit+0x767>
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
64d0: 66 81 79 06 00 01 cmpw $0x100,0x6(%rcx)
64d6: 0f 85 8b 01 00 00 jne 6667 <vxlan_xmit+0x767>
skb->transport_header += offset;
}
static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
return skb->head + skb->network_header;
64dc: 0f b6 41 04 movzbl 0x4(%rcx),%eax
64e0: 41 3a 86 75 02 00 00 cmp 0x275(%r14),%al
dev->stats.tx_dropped++;
goto out;
}
parp = arp_hdr(skb);
if ((parp->ar_hrd != htons(ARPHRD_ETHER) &&
64e7: 0f 85 7a 01 00 00 jne 6667 <vxlan_xmit+0x767>
64ed: 80 79 05 04 cmpb $0x4,0x5(%rcx)
64f1: 0f 85 70 01 00 00 jne 6667 <vxlan_xmit+0x767>
parp->ar_hrd != htons(ARPHRD_IEEE802)) ||
64f7: 4c 8d 61 08 lea 0x8(%rcx),%r12
64fb: 49 8d 14 04 lea (%r12,%rax,1),%rdx
64ff: 8b 44 02 04 mov 0x4(%rdx,%rax,1),%eax
parp->ar_pro != htons(ETH_P_IP) ||
6503: 8b 0a mov (%rdx),%ecx
6505: 3c 7f cmp $0x7f,%al
6507: 89 4c 24 20 mov %ecx,0x20(%rsp)
650b: 89 44 24 2c mov %eax,0x2c(%rsp)
parp->ar_op != htons(ARPOP_REQUEST) ||
parp->ar_hln != dev->addr_len ||
650f: 0f 84 52 01 00 00 je 6667 <vxlan_xmit+0x767>
parp = arp_hdr(skb);
if ((parp->ar_hrd != htons(ARPHRD_ETHER) &&
parp->ar_hrd != htons(ARPHRD_IEEE802)) ||
parp->ar_pro != htons(ETH_P_IP) ||
parp->ar_op != htons(ARPOP_REQUEST) ||
6515: 25 f0 00 00 00 and $0xf0,%eax
651a: 3d e0 00 00 00 cmp $0xe0,%eax
parp->ar_hln != dev->addr_len ||
651f: 0f 84 42 01 00 00 je 6667 <vxlan_xmit+0x767>
6525: 48 8d 74 24 2c lea 0x2c(%rsp),%rsi
parp->ar_pln != 4)
goto out;
arpptr = (u8 *)parp + sizeof(struct arphdr);
652a: 4c 89 f2 mov %r14,%rdx
sha = arpptr;
arpptr += dev->addr_len; /* sha */
652d: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
memcpy(&sip, arpptr, sizeof(sip));
6534: e8 00 00 00 00 callq 6539 <vxlan_xmit+0x639>
6539: 48 85 c0 test %rax,%rax
arpptr += sizeof(sip);
arpptr += dev->addr_len; /* tha */
memcpy(&tip, arpptr, sizeof(tip));
653c: 49 89 c7 mov %rax,%r15
if (ipv4_is_loopback(tip) ||
653f: 0f 84 f7 06 00 00 je 6c3c <vxlan_xmit+0xd3c>
6545: f6 80 ad 00 00 00 c2 testb $0xc2,0xad(%rax)
654c: 0f 84 98 03 00 00 je 68ea <vxlan_xmit+0x9ea>
6552: 48 8d 90 b8 00 00 00 lea 0xb8(%rax),%rdx
ipv4_is_multicast(tip))
goto out;
n = neigh_lookup(&arp_tbl, &tip, dev);
6559: 4c 89 ef mov %r13,%rdi
655c: 48 89 d6 mov %rdx,%rsi
655f: 48 89 54 24 18 mov %rdx,0x18(%rsp)
6564: e8 97 9a ff ff callq 0 <__vxlan_find_mac>
if (n) {
6569: 48 85 c0 test %rax,%rax
if (ipv4_is_loopback(tip) ||
ipv4_is_multicast(tip))
goto out;
n = neigh_lookup(&arp_tbl, &tip, dev);
656c: 48 8b 54 24 18 mov 0x18(%rsp),%rdx
if (n) {
6571: 74 29 je 659c <vxlan_xmit+0x69c>
6573: 48 8b 0d 00 00 00 00 mov 0x0(%rip),%rcx # 657a <vxlan_xmit+0x67a>
struct vxlan_fdb *f;
struct sk_buff *reply;
if (!(n->nud_state & NUD_CONNECTED)) {
657a: 48 89 48 28 mov %rcx,0x28(%rax)
657e: 48 8b 40 30 mov 0x30(%rax),%rax
neigh_release(n);
goto out;
}
f = vxlan_find_mac(vxlan, n->ha);
6582: 66 83 78 d8 0a cmpw $0xa,-0x28(%rax)
6587: 0f 84 4d 03 00 00 je 68da <vxlan_xmit+0x9da>
static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
658d: 83 78 dc 00 cmpl $0x0,-0x24(%rax)
6591: 0f 94 c0 sete %al
6594: 84 c0 test %al,%al
6596: 0f 85 4e 03 00 00 jne 68ea <vxlan_xmit+0x9ea>
if (f)
659c: 48 89 14 24 mov %rdx,(%rsp)
65a0: 44 8b 44 24 2c mov 0x2c(%rsp),%r8d
f->used = jiffies;
65a5: 4d 89 e1 mov %r12,%r9
65a8: 8b 54 24 20 mov 0x20(%rsp),%edx
65ac: 4c 89 64 24 08 mov %r12,0x8(%rsp)
65b1: 4c 89 f1 mov %r14,%rcx
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
static inline bool vxlan_addr_any(const union vxlan_addr *ipa)
{
if (ipa->sa.sa_family == AF_INET6)
65b4: be 06 08 00 00 mov $0x806,%esi
65b9: bf 02 00 00 00 mov $0x2,%edi
return ipv6_addr_any(&ipa->sin6.sin6_addr);
else
return ipa->sin.sin_addr.s_addr == htonl(INADDR_ANY);
65be: e8 00 00 00 00 callq 65c3 <vxlan_xmit+0x6c3>
65c3: 4c 89 ff mov %r15,%rdi
neigh_release(n);
goto out;
}
f = vxlan_find_mac(vxlan, n->ha);
if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
65c6: 49 89 c4 mov %rax,%r12
65c9: e8 72 a2 ff ff callq 840 <neigh_release>
/* bridge-local neighbor */
neigh_release(n);
goto out;
}
reply = arp_create(ARPOP_REPLY, ETH_P_ARP, sip, dev, tip, sha,
65ce: 4d 85 e4 test %r12,%r12
65d1: 0f 84 90 00 00 00 je 6667 <vxlan_xmit+0x767>
65d7: 49 8b 8c 24 d8 00 00 mov 0xd8(%r12),%rcx
65de: 00
65df: 49 8b 84 24 d0 00 00 mov 0xd0(%r12),%rax
65e6: 00
65e7: 48 89 ca mov %rcx,%rdx
65ea: 48 29 c2 sub %rax,%rdx
65ed: 66 41 89 94 24 c6 00 mov %dx,0xc6(%r12)
65f4: 00 00
65f6: 41 0f b7 94 24 c4 00 movzwl 0xc4(%r12),%edx
65fd: 00 00
n->ha, sha);
neigh_release(n);
if (reply == NULL)
65ff: 48 01 d0 add %rdx,%rax
6602: 41 8b 94 24 80 00 00 mov 0x80(%r12),%edx
6609: 00
return skb->mac_header != (typeof(skb->mac_header))~0U;
}
static inline void skb_reset_mac_header(struct sk_buff *skb)
{
skb->mac_header = skb->data - skb->head;
660a: 48 29 c8 sub %rcx,%rax
660d: 29 c2 sub %eax,%edx
660f: 41 3b 94 24 84 00 00 cmp 0x84(%r12),%edx
6616: 00
6617: 41 89 94 24 80 00 00 mov %edx,0x80(%r12)
661e: 00
661f: 0f 82 54 05 00 00 jb 6b79 <vxlan_xmit+0xc79>
6625: 89 c0 mov %eax,%eax
return skb->inner_transport_header - skb->inner_network_header;
}
static inline int skb_network_offset(const struct sk_buff *skb)
{
return skb_network_header(skb) - skb->data;
6627: 41 80 a4 24 90 00 00 andb $0xf8,0x90(%r12)
662e: 00 f8
6630: 4c 89 e7 mov %r12,%rdi
}
unsigned char *skb_pull(struct sk_buff *skb, unsigned int len);
static inline unsigned char *__skb_pull(struct sk_buff *skb, unsigned int len)
{
skb->len -= len;
6633: 48 01 c8 add %rcx,%rax
6636: 49 89 84 24 d8 00 00 mov %rax,0xd8(%r12)
663d: 00
663e: 41 0f b6 84 24 91 00 movzbl 0x91(%r12),%eax
6645: 00 00
6647: 83 e0 f9 and $0xfffffff9,%eax
664a: 83 c8 02 or $0x2,%eax
664d: 41 88 84 24 91 00 00 mov %al,0x91(%r12)
6654: 00
BUG_ON(skb->len < skb->data_len);
return skb->data += len;
6655: e8 00 00 00 00 callq 665a <vxlan_xmit+0x75a>
goto out;
skb_reset_mac_header(reply);
__skb_pull(reply, skb_network_offset(reply));
reply->ip_summed = CHECKSUM_UNNECESSARY;
reply->pkt_type = PACKET_HOST;
665a: 83 e8 01 sub $0x1,%eax
665d: 75 08 jne 6667 <vxlan_xmit+0x767>
665f: 49 83 86 60 01 00 00 addq $0x1,0x160(%r14)
6666: 01
6667: 48 89 df mov %rbx,%rdi
666a: e8 00 00 00 00 callq 666f <vxlan_xmit+0x76f>
if (reply == NULL)
goto out;
skb_reset_mac_header(reply);
__skb_pull(reply, skb_network_offset(reply));
reply->ip_summed = CHECKSUM_UNNECESSARY;
666f: e9 f9 f9 ff ff jmpq 606d <vxlan_xmit+0x16d>
6674: 8d 44 00 10 lea 0x10(%rax,%rax,1),%eax
6678: e9 15 fe ff ff jmpq 6492 <vxlan_xmit+0x592>
667d: 8b 8b 80 00 00 00 mov 0x80(%rbx),%ecx
6683: 89 c8 mov %ecx,%eax
reply->pkt_type = PACKET_HOST;
if (netif_rx_ni(reply) == NET_RX_DROP)
6685: 2b 83 84 00 00 00 sub 0x84(%rbx),%eax
668b: 83 f8 13 cmp $0x13,%eax
668e: 0f 86 0a 02 00 00 jbe 689e <vxlan_xmit+0x99e>
if (reply == NULL)
goto out;
if (netif_rx_ni(reply) == NET_RX_DROP)
dev->stats.rx_dropped++;
6694: 44 0f b7 83 c4 00 00 movzwl 0xc4(%rbx),%r8d
669b: 00
vxlan_ip_miss(dev, &ipa);
}
out:
consume_skb(skb);
669c: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
struct nd_msg *msg;
msg = (struct nd_msg *)skb_transport_header(skb);
if (msg->icmph.icmp6_code == 0 &&
msg->icmph.icmp6_type == NDISC_NEIGHBOUR_SOLICITATION)
return neigh_reduce(dev, skb);
66a3: 49 01 d0 add %rdx,%r8
#endif
default:
/* ARP header, plus 2 device addresses, plus 2 IP addresses. */
return sizeof(struct arphdr) + (dev->addr_len + sizeof(u32)) * 2;
66a6: 4c 89 f2 mov %r14,%rdx
66a9: 49 8d 70 10 lea 0x10(%r8),%rsi
66ad: 4c 89 44 24 20 mov %r8,0x20(%rsp)
66b2: e8 00 00 00 00 callq 66b7 <vxlan_xmit+0x7b7>
return skb->data_len;
}
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
66b7: 48 85 c0 test %rax,%rax
66ba: 4c 8b 44 24 20 mov 0x20(%rsp),%r8
return unlikely(len > skb->len) ? NULL : __pskb_pull(skb, len);
}
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
66bf: 0f 85 a8 fa ff ff jne 616d <vxlan_xmit+0x26d>
skb->transport_header += offset;
}
static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
return skb->head + skb->network_header;
66c5: 41 f6 86 d8 08 00 00 testb $0x10,0x8d8(%r14)
66cc: 10
struct iphdr *pip;
if (!pskb_may_pull(skb, sizeof(struct iphdr)))
return false;
pip = ip_hdr(skb);
n = neigh_lookup(&arp_tbl, &pip->daddr, dev);
66cd: 74 28 je 66f7 <vxlan_xmit+0x7f7>
66cf: 48 8d 7c 24 30 lea 0x30(%rsp),%rdi
66d4: b9 07 00 00 00 mov $0x7,%ecx
66d9: 48 8d 74 24 30 lea 0x30(%rsp),%rsi
66de: f3 ab rep stos %eax,%es:(%rdi)
66e0: 66 c7 44 24 30 02 00 movw $0x2,0x30(%rsp)
if (!n && (vxlan->flags & VXLAN_F_L3MISS)) {
66e7: 4c 89 f7 mov %r14,%rdi
66ea: 41 8b 40 10 mov 0x10(%r8),%eax
66ee: 89 44 24 34 mov %eax,0x34(%rsp)
66f2: e8 59 b9 ff ff callq 2050 <vxlan_ip_miss>
66f7: c6 44 24 18 00 movb $0x0,0x18(%rsp)
66fc: e9 e3 f8 ff ff jmpq 5fe4 <vxlan_xmit+0xe4>
union vxlan_addr ipa = {
6701: 41 f6 86 d8 08 00 00 testb $0x8,0x8d8(%r14)
6708: 08
.sin.sin_addr.s_addr = pip->daddr,
.sin.sin_family = AF_INET,
};
vxlan_ip_miss(dev, &ipa);
6709: 74 11 je 671c <vxlan_xmit+0x81c>
670b: 41 f6 07 01 testb $0x1,(%r15)
if (!pskb_may_pull(skb, sizeof(struct iphdr)))
return false;
pip = ip_hdr(skb);
n = neigh_lookup(&arp_tbl, &pip->daddr, dev);
if (!n && (vxlan->flags & VXLAN_F_L3MISS)) {
union vxlan_addr ipa = {
670f: 75 0b jne 671c <vxlan_xmit+0x81c>
6711: 4c 89 fe mov %r15,%rsi
6714: 4c 89 ef mov %r13,%rdi
.sin.sin_addr.s_addr = pip->daddr,
.sin.sin_family = AF_INET,
};
vxlan_ip_miss(dev, &ipa);
6717: e8 e4 b9 ff ff callq 2100 <vxlan_fdb_miss>
if (!pskb_may_pull(skb, sizeof(struct iphdr)))
return false;
pip = ip_hdr(skb);
n = neigh_lookup(&arp_tbl, &pip->daddr, dev);
if (!n && (vxlan->flags & VXLAN_F_L3MISS)) {
union vxlan_addr ipa = {
671c: 49 83 86 68 01 00 00 addq $0x1,0x168(%r14)
6723: 01
.sin.sin_addr.s_addr = pip->daddr,
.sin.sin_family = AF_INET,
};
vxlan_ip_miss(dev, &ipa);
6724: e9 3c f9 ff ff jmpq 6065 <vxlan_xmit+0x165>
6729: 83 fa 3f cmp $0x3f,%edx
672c: 0f 86 73 f8 ff ff jbe 5fa5 <vxlan_xmit+0xa5>
}
if (f == NULL) {
f = vxlan_find_mac(vxlan, all_zeros_mac);
if (f == NULL) {
if ((vxlan->flags & VXLAN_F_L2MISS) &&
6732: be 40 00 00 00 mov $0x40,%esi
6737: 48 89 df mov %rbx,%rdi
673a: 29 c6 sub %eax,%esi
673c: e8 00 00 00 00 callq 6741 <vxlan_xmit+0x841>
!is_multicast_ether_addr(eth->h_dest))
vxlan_fdb_miss(vxlan, eth->h_dest);
6741: 48 85 c0 test %rax,%rax
6744: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
674b: 0f 84 54 f8 ff ff je 5fa5 <vxlan_xmit+0xa5>
dev->stats.tx_dropped++;
6751: e9 dd fa ff ff jmpq 6233 <vxlan_xmit+0x333>
6756: 39 c6 cmp %eax,%esi
6758: 72 15 jb 676f <vxlan_xmit+0x86f>
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
return 1;
if (unlikely(len > skb->len))
675a: 29 d0 sub %edx,%eax
675c: 48 89 df mov %rbx,%rdi
675f: 89 c6 mov %eax,%esi
6761: e8 00 00 00 00 callq 6766 <vxlan_xmit+0x866>
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
6766: 48 85 c0 test %rax,%rax
6769: 0f 85 0a 05 00 00 jne 6c79 <vxlan_xmit+0xd79>
676f: 49 83 86 68 01 00 00 addq $0x1,0x168(%r14)
6776: 01
if (vxlan->flags & VXLAN_F_PROXY) {
eth = eth_hdr(skb);
if (ntohs(eth->h_proto) == ETH_P_ARP)
return arp_reduce(dev, skb);
#if IS_ENABLED(CONFIG_IPV6)
else if (ntohs(eth->h_proto) == ETH_P_IPV6 &&
6777: e9 eb fe ff ff jmpq 6667 <vxlan_xmit+0x767>
677c: 48 89 c7 mov %rax,%rdi
677f: e8 00 00 00 00 callq 6784 <vxlan_xmit+0x884>
6784: e9 29 fa ff ff jmpq 61b2 <vxlan_xmit+0x2b2>
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
return 1;
if (unlikely(len > skb->len))
6789: 41 0f b6 96 75 02 00 movzbl 0x275(%r14),%edx
6790: 00
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
6791: 4c 8d 46 06 lea 0x6(%rsi),%r8
6795: 83 fa 08 cmp $0x8,%edx
struct neighbour *n;
if (dev->flags & IFF_NOARP)
goto out;
if (!pskb_may_pull(skb, arp_hdr_len(dev))) {
6798: 0f 83 a3 00 00 00 jae 6841 <vxlan_xmit+0x941>
679e: f6 c2 04 test $0x4,%dl
dev->stats.tx_dropped++;
67a1: 0f 85 d4 03 00 00 jne 6b7b <vxlan_xmit+0xc7b>
67a7: 85 d2 test %edx,%edx
67a9: 74 25 je 67d0 <vxlan_xmit+0x8d0>
67ab: 0f b6 0e movzbl (%rsi),%ecx
*/
static inline void neigh_release(struct neighbour *neigh)
{
if (atomic_dec_and_test(&neigh->refcnt))
neigh_destroy(neigh);
67ae: f6 c2 02 test $0x2,%dl
67b1: 88 4e 06 mov %cl,0x6(%rsi)
67b4: 0f 85 15 04 00 00 jne 6bcf <vxlan_xmit+0xccf>
if (n) {
bool diff;
diff = !ether_addr_equal(eth_hdr(skb)->h_dest, n->ha);
if (diff) {
memcpy(eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
67ba: 0f b7 bb c6 00 00 00 movzwl 0xc6(%rbx),%edi
67c1: 41 0f b6 96 75 02 00 movzbl 0x275(%r14),%edx
67c8: 00
67c9: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
67d0: 48 01 f9 add %rdi,%rcx
67d3: 83 fa 08 cmp $0x8,%edx
67d6: 48 8d b0 b8 00 00 00 lea 0xb8(%rax),%rsi
67dd: 73 31 jae 6810 <vxlan_xmit+0x910>
67df: f6 c2 04 test $0x4,%dl
67e2: 0f 85 bd 03 00 00 jne 6ba5 <vxlan_xmit+0xca5>
67e8: 85 d2 test %edx,%edx
67ea: 0f 84 b8 f9 ff ff je 61a8 <vxlan_xmit+0x2a8>
67f0: 0f b6 3e movzbl (%rsi),%edi
67f3: f6 c2 02 test $0x2,%dl
67f6: 40 88 39 mov %dil,(%rcx)
67f9: 0f 84 a9 f9 ff ff je 61a8 <vxlan_xmit+0x2a8>
67ff: 89 d2 mov %edx,%edx
dev->addr_len);
memcpy(eth_hdr(skb)->h_dest, n->ha, dev->addr_len);
6801: 0f b7 74 16 fe movzwl -0x2(%rsi,%rdx,1),%esi
6806: 66 89 74 11 fe mov %si,-0x2(%rcx,%rdx,1)
680b: e9 98 f9 ff ff jmpq 61a8 <vxlan_xmit+0x2a8>
6810: 48 8b b8 b8 00 00 00 mov 0xb8(%rax),%rdi
6817: 48 89 39 mov %rdi,(%rcx)
681a: 89 d7 mov %edx,%edi
681c: 4c 8b 44 3e f8 mov -0x8(%rsi,%rdi,1),%r8
6821: 4c 89 44 39 f8 mov %r8,-0x8(%rcx,%rdi,1)
6826: 48 8d 79 08 lea 0x8(%rcx),%rdi
682a: 48 83 e7 f8 and $0xfffffffffffffff8,%rdi
682e: 48 29 f9 sub %rdi,%rcx
6831: 48 29 ce sub %rcx,%rsi
6834: 01 d1 add %edx,%ecx
6836: c1 e9 03 shr $0x3,%ecx
6839: f3 48 a5 rep movsq %ds:(%rsi),%es:(%rdi)
683c: e9 67 f9 ff ff jmpq 61a8 <vxlan_xmit+0x2a8>
6841: 48 8b 0e mov (%rsi),%rcx
6844: 48 89 4e 06 mov %rcx,0x6(%rsi)
6848: 89 d1 mov %edx,%ecx
684a: 48 8b 7c 0e f8 mov -0x8(%rsi,%rcx,1),%rdi
684f: 49 89 7c 08 f8 mov %rdi,-0x8(%r8,%rcx,1)
6854: 48 8d 7e 0e lea 0xe(%rsi),%rdi
6858: 48 83 e7 f8 and $0xfffffffffffffff8,%rdi
685c: 49 29 f8 sub %rdi,%r8
685f: 42 8d 0c 02 lea (%rdx,%r8,1),%ecx
6863: 4c 29 c6 sub %r8,%rsi
6866: c1 e9 03 shr $0x3,%ecx
6869: f3 48 a5 rep movsq %ds:(%rsi),%es:(%rdi)
686c: e9 49 ff ff ff jmpq 67ba <vxlan_xmit+0x8ba>
if (n) {
bool diff;
diff = !ether_addr_equal(eth_hdr(skb)->h_dest, n->ha);
if (diff) {
memcpy(eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
6871: 83 f9 27 cmp $0x27,%ecx
6874: 0f 86 7d fe ff ff jbe 66f7 <vxlan_xmit+0x7f7>
687a: be 28 00 00 00 mov $0x28,%esi
687f: 48 89 df mov %rbx,%rdi
6882: 29 c6 sub %eax,%esi
6884: e8 00 00 00 00 callq 6889 <vxlan_xmit+0x989>
6889: 48 85 c0 test %rax,%rax
688c: 0f 84 65 fe ff ff je 66f7 <vxlan_xmit+0x7f7>
6892: 48 8b 93 d0 00 00 00 mov 0xd0(%rbx),%rdx
6899: e9 9a f8 ff ff jmpq 6138 <vxlan_xmit+0x238>
689e: 83 f9 13 cmp $0x13,%ecx
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
return 1;
if (unlikely(len > skb->len))
68a1: 0f 86 50 fe ff ff jbe 66f7 <vxlan_xmit+0x7f7>
68a7: be 14 00 00 00 mov $0x14,%esi
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
68ac: 48 89 df mov %rbx,%rdi
68af: 29 c6 sub %eax,%esi
68b1: e8 00 00 00 00 callq 68b6 <vxlan_xmit+0x9b6>
68b6: 48 85 c0 test %rax,%rax
#if IS_ENABLED(CONFIG_IPV6)
case ETH_P_IPV6:
{
struct ipv6hdr *pip6;
if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
68b9: 0f 84 38 fe ff ff je 66f7 <vxlan_xmit+0x7f7>
68bf: 48 8b 93 d0 00 00 00 mov 0xd0(%rbx),%rdx
68c6: e9 c9 fd ff ff jmpq 6694 <vxlan_xmit+0x794>
68cb: c6 44 24 18 01 movb $0x1,0x18(%rsp)
static inline int pskb_may_pull(struct sk_buff *skb, unsigned int len)
{
if (likely(len <= skb_headlen(skb)))
return 1;
if (unlikely(len > skb->len))
68d0: e9 1b f9 ff ff jmpq 61f0 <vxlan_xmit+0x2f0>
68d5: e8 00 00 00 00 callq 68da <vxlan_xmit+0x9da>
return 0;
return __pskb_pull_tail(skb, len - skb_headlen(skb)) != NULL;
68da: 48 8b 48 e0 mov -0x20(%rax),%rcx
68de: 48 0b 48 e8 or -0x18(%rax),%rcx
68e2: 0f 94 c0 sete %al
68e5: e9 aa fc ff ff jmpq 6594 <vxlan_xmit+0x694>
switch (ntohs(eth_hdr(skb)->h_proto)) {
case ETH_P_IP:
{
struct iphdr *pip;
if (!pskb_may_pull(skb, sizeof(struct iphdr)))
68ea: 4c 89 ff mov %r15,%rdi
68ed: e8 4e 9f ff ff callq 840 <neigh_release>
68f2: e9 70 fd ff ff jmpq 6667 <vxlan_xmit+0x767>
68f7: 41 f6 86 d8 08 00 00 testb $0x10,0x8d8(%r14)
68fe: 10
const u8 *mac)
{
struct vxlan_fdb *f;
f = __vxlan_find_mac(vxlan, mac);
if (f)
68ff: 0f 84 62 fd ff ff je 6667 <vxlan_xmit+0x767>
if (fdst)
vxlan_xmit_one(skb, dev, fdst, did_rsc);
else
kfree_skb(skb);
return NETDEV_TX_OK;
}
6905: 48 8d 7c 24 30 lea 0x30(%rsp),%rdi
static inline bool ipv6_addr_any(const struct in6_addr *a)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
const unsigned long *ul = (const unsigned long *)a;
return (ul[0] | ul[1]) == 0UL;
690a: 31 c0 xor %eax,%eax
690c: b9 07 00 00 00 mov $0x7,%ecx
6911: 48 8d 74 24 30 lea 0x30(%rsp),%rsi
6916: f3 ab rep stos %eax,%es:(%rdi)
6918: 66 c7 44 24 30 0a 00 movw $0xa,0x30(%rsp)
}
reply = vxlan_na_create(skb, n,
!!(f ? f->flags & NTF_ROUTER : 0));
neigh_release(n);
691f: 4c 89 f7 mov %r14,%rdi
6922: 49 8b 44 24 08 mov 0x8(%r12),%rax
goto out;
if (netif_rx_ni(reply) == NET_RX_DROP)
dev->stats.rx_dropped++;
} else if (vxlan->flags & VXLAN_F_L3MISS) {
6927: 49 8b 54 24 10 mov 0x10(%r12),%rdx
692c: 48 89 44 24 38 mov %rax,0x38(%rsp)
6931: 48 89 54 24 40 mov %rdx,0x40(%rsp)
union vxlan_addr ipa = {
6936: e8 15 b7 ff ff callq 2050 <vxlan_ip_miss>
693b: e9 27 fd ff ff jmpq 6667 <vxlan_xmit+0x767>
6940: 45 31 c0 xor %r8d,%r8d
.sin6.sin6_addr = msg->target,
.sin6.sin6_family = AF_INET6,
};
vxlan_ip_miss(dev, &ipa);
6943: 4a 8d 7c 02 1a lea 0x1a(%rdx,%r8,1),%rdi
if (netif_rx_ni(reply) == NET_RX_DROP)
dev->stats.rx_dropped++;
} else if (vxlan->flags & VXLAN_F_L3MISS) {
union vxlan_addr ipa = {
6948: 0f b7 c9 movzwl %cx,%ecx
694b: be 0e 00 00 00 mov $0xe,%esi
.sin6.sin6_addr = msg->target,
.sin6.sin6_family = AF_INET6,
};
vxlan_ip_miss(dev, &ipa);
6950: 48 89 54 24 10 mov %rdx,0x10(%rsp)
if (netif_rx_ni(reply) == NET_RX_DROP)
dev->stats.rx_dropped++;
} else if (vxlan->flags & VXLAN_F_L3MISS) {
union vxlan_addr ipa = {
6955: 48 01 c8 add %rcx,%rax
6958: 8b 0f mov (%rdi),%ecx
695a: 89 08 mov %ecx,(%rax)
695c: 0f b7 4f 04 movzwl 0x4(%rdi),%ecx
6960: 4c 89 e7 mov %r12,%rdi
6963: 66 89 48 04 mov %cx,0x4(%rax)
.sin6.sin6_addr = msg->target,
.sin6.sin6_family = AF_INET6,
};
vxlan_ip_miss(dev, &ipa);
6967: 41 0f b7 84 24 c6 00 movzwl 0xc6(%r12),%eax
696e: 00 00
6970: 49 03 84 24 d0 00 00 add 0xd0(%r12),%rax
6977: 00
skb->network_header += offset;
}
static inline unsigned char *skb_mac_header(const struct sk_buff *skb)
{
return skb->head + skb->mac_header;
6978: 41 8b 8f b8 00 00 00 mov 0xb8(%r15),%ecx
ether_addr_copy(eth_hdr(reply)->h_dest, daddr);
ether_addr_copy(eth_hdr(reply)->h_source, n->ha);
eth_hdr(reply)->h_proto = htons(ETH_P_IPV6);
reply->protocol = htons(ETH_P_IPV6);
skb_pull(reply, sizeof(struct ethhdr));
697f: 89 48 06 mov %ecx,0x6(%rax)
6982: 41 0f b7 8f bc 00 00 movzwl 0xbc(%r15),%ecx
6989: 00
* Please note: dst & src must both be aligned to u16.
*/
static inline void ether_addr_copy(u8 *dst, const u8 *src)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
*(u32 *)dst = *(const u32 *)src;
698a: 66 89 48 0a mov %cx,0xa(%rax)
*(u16 *)(dst + 4) = *(const u16 *)(src + 4);
698e: 41 0f b7 84 24 c6 00 movzwl 0xc6(%r12),%eax
6995: 00 00
6997: 49 8b 8c 24 d0 00 00 mov 0xd0(%r12),%rcx
699e: 00
699f: 66 c7 44 01 0c 86 dd movw $0xdd86,0xc(%rcx,%rax,1)
69a6: 66 41 c7 84 24 c0 00 movw $0xdd86,0xc0(%r12)
69ad: 00 00 86 dd
* Please note: dst & src must both be aligned to u16.
*/
static inline void ether_addr_copy(u8 *dst, const u8 *src)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
*(u32 *)dst = *(const u32 *)src;
69b1: e8 00 00 00 00 callq 69b6 <vxlan_xmit+0xab6>
*(u16 *)(dst + 4) = *(const u16 *)(src + 4);
69b6: 49 8b 84 24 d8 00 00 mov 0xd8(%r12),%rax
69bd: 00
}
/* Ethernet header */
ether_addr_copy(eth_hdr(reply)->h_dest, daddr);
ether_addr_copy(eth_hdr(reply)->h_source, n->ha);
eth_hdr(reply)->h_proto = htons(ETH_P_IPV6);
69be: 49 2b 84 24 d0 00 00 sub 0xd0(%r12),%rax
69c5: 00
69c6: be 28 00 00 00 mov $0x28,%esi
69cb: 4c 89 e7 mov %r12,%rdi
69ce: 66 41 89 84 24 c4 00 mov %ax,0xc4(%r12)
69d5: 00 00
reply->protocol = htons(ETH_P_IPV6);
69d7: e8 00 00 00 00 callq 69dc <vxlan_xmit+0xadc>
69dc: 45 0f b7 ac 24 c4 00 movzwl 0xc4(%r12),%r13d
69e3: 00 00
skb_pull(reply, sizeof(struct ethhdr));
69e5: 45 31 c0 xor %r8d,%r8d
return skb->head + skb->network_header;
}
static inline void skb_reset_network_header(struct sk_buff *skb)
{
skb->network_header = skb->data - skb->head;
69e8: b9 0a 00 00 00 mov $0xa,%ecx
69ed: 4d 03 ac 24 d0 00 00 add 0xd0(%r12),%r13
69f4: 00
69f5: 44 89 c0 mov %r8d,%eax
skb_reset_network_header(reply);
skb_put(reply, sizeof(struct ipv6hdr));
69f8: 44 89 44 24 18 mov %r8d,0x18(%rsp)
69fd: 4c 89 ef mov %r13,%rdi
6a00: f3 ab rep stos %eax,%es:(%rdi)
6a02: b9 60 00 00 00 mov $0x60,%ecx
6a07: 41 88 4d 00 mov %cl,0x0(%r13)
6a0b: 0f b7 83 c4 00 00 00 movzwl 0xc4(%rbx),%eax
skb->transport_header += offset;
}
static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
return skb->head + skb->network_header;
6a12: 48 8b b3 d0 00 00 00 mov 0xd0(%rbx),%rsi
/* IPv6 header */
pip6 = ipv6_hdr(reply);
memset(pip6, 0, sizeof(struct ipv6hdr));
6a19: 0f b6 04 06 movzbl (%rsi,%rax,1),%eax
6a1d: 41 c6 45 06 3a movb $0x3a,0x6(%r13)
6a22: 41 c6 45 07 ff movb $0xff,0x7(%r13)
6a27: 83 e0 0f and $0xf,%eax
6a2a: 09 c8 or %ecx,%eax
6a2c: 41 88 45 00 mov %al,0x0(%r13)
6a30: 0f b7 83 c4 00 00 00 movzwl 0xc4(%rbx),%eax
pip6->version = 6;
6a37: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
pip6->priority = ipv6_hdr(request)->priority;
6a3e: 48 8b 74 01 08 mov 0x8(%rcx,%rax,1),%rsi
6a43: 48 8b 7c 01 10 mov 0x10(%rcx,%rax,1),%rdi
6a48: 49 89 75 18 mov %rsi,0x18(%r13)
6a4c: 49 89 7d 20 mov %rdi,0x20(%r13)
pip6->nexthdr = IPPROTO_ICMPV6;
6a50: 49 8b b7 90 01 00 00 mov 0x190(%r15),%rsi
/* IPv6 header */
pip6 = ipv6_hdr(reply);
memset(pip6, 0, sizeof(struct ipv6hdr));
pip6->version = 6;
pip6->priority = ipv6_hdr(request)->priority;
6a57: 49 8b bf 98 01 00 00 mov 0x198(%r15),%rdi
6a5e: 49 89 75 08 mov %rsi,0x8(%r13)
pip6->nexthdr = IPPROTO_ICMPV6;
pip6->hop_limit = 255;
pip6->daddr = ipv6_hdr(request)->saddr;
6a62: 49 89 7d 10 mov %rdi,0x10(%r13)
6a66: be 28 00 00 00 mov $0x28,%esi
6a6b: 4c 89 e7 mov %r12,%rdi
6a6e: e8 00 00 00 00 callq 6a73 <vxlan_xmit+0xb73>
6a73: 49 8b 84 24 d8 00 00 mov 0xd8(%r12),%rax
6a7a: 00
6a7b: 49 2b 84 24 d0 00 00 sub 0xd0(%r12),%rax
6a82: 00
pip6->saddr = *(struct in6_addr *)n->primary_key;
6a83: be 20 00 00 00 mov $0x20,%esi
6a88: 4c 89 e7 mov %r12,%rdi
6a8b: 66 41 89 84 24 c2 00 mov %ax,0xc2(%r12)
6a92: 00 00
6a94: e8 00 00 00 00 callq 6a99 <vxlan_xmit+0xb99>
skb_pull(reply, sizeof(struct ipv6hdr));
6a99: 44 8b 44 24 18 mov 0x18(%rsp),%r8d
6a9e: 49 89 c2 mov %rax,%r10
6aa1: 48 89 c7 mov %rax,%rdi
return skb->head + skb->transport_header;
}
static inline void skb_reset_transport_header(struct sk_buff *skb)
{
skb->transport_header = skb->data - skb->head;
6aa4: b9 08 00 00 00 mov $0x8,%ecx
6aa9: 48 8b 54 24 10 mov 0x10(%rsp),%rdx
6aae: be 20 00 00 00 mov $0x20,%esi
skb_reset_transport_header(reply);
na = (struct nd_msg *)skb_put(reply, sizeof(*na) + na_olen);
6ab3: 44 89 c0 mov %r8d,%eax
6ab6: f3 ab rep stos %eax,%es:(%rdi)
6ab8: 0f b6 44 24 20 movzbl 0x20(%rsp),%eax
6abd: 41 c6 02 88 movb $0x88,(%r10)
6ac1: 4c 89 d7 mov %r10,%rdi
6ac4: 4c 89 54 24 20 mov %r10,0x20(%rsp)
/* Neighbor Advertisement */
memset(na, 0, sizeof(*na)+na_olen);
6ac9: c1 e0 07 shl $0x7,%eax
6acc: 83 c8 60 or $0x60,%eax
pip6->saddr = *(struct in6_addr *)n->primary_key;
skb_pull(reply, sizeof(struct ipv6hdr));
skb_reset_transport_header(reply);
na = (struct nd_msg *)skb_put(reply, sizeof(*na) + na_olen);
6acf: 41 88 42 04 mov %al,0x4(%r10)
/* Neighbor Advertisement */
memset(na, 0, sizeof(*na)+na_olen);
6ad3: 48 8b 42 08 mov 0x8(%rdx),%rax
6ad7: 48 8b 52 10 mov 0x10(%rdx),%rdx
na->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT;
na->icmph.icmp6_router = isrouter;
na->icmph.icmp6_override = 1;
na->icmph.icmp6_solicited = 1;
na->target = ns->target;
6adb: 49 89 42 08 mov %rax,0x8(%r10)
ether_addr_copy(&na->opt[2], n->ha);
na->opt[0] = ND_OPT_TARGET_LL_ADDR;
na->opt[1] = na_olen >> 3;
na->icmph.icmp6_cksum = csum_ipv6_magic(&pip6->saddr,
6adf: 49 89 52 10 mov %rdx,0x10(%r10)
skb_reset_transport_header(reply);
na = (struct nd_msg *)skb_put(reply, sizeof(*na) + na_olen);
/* Neighbor Advertisement */
memset(na, 0, sizeof(*na)+na_olen);
6ae3: 41 8b 87 b8 00 00 00 mov 0xb8(%r15),%eax
na->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT;
na->icmph.icmp6_router = isrouter;
6aea: 31 d2 xor %edx,%edx
6aec: 41 89 42 1a mov %eax,0x1a(%r10)
na = (struct nd_msg *)skb_put(reply, sizeof(*na) + na_olen);
/* Neighbor Advertisement */
memset(na, 0, sizeof(*na)+na_olen);
na->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT;
6af0: 41 0f b7 87 bc 00 00 movzwl 0xbc(%r15),%eax
6af7: 00
na->target = ns->target;
ether_addr_copy(&na->opt[2], n->ha);
na->opt[0] = ND_OPT_TARGET_LL_ADDR;
na->opt[1] = na_olen >> 3;
na->icmph.icmp6_cksum = csum_ipv6_magic(&pip6->saddr,
6af8: 41 c6 42 18 02 movb $0x2,0x18(%r10)
/* Neighbor Advertisement */
memset(na, 0, sizeof(*na)+na_olen);
na->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT;
na->icmph.icmp6_router = isrouter;
na->icmph.icmp6_override = 1;
na->icmph.icmp6_solicited = 1;
6afd: 41 c6 42 19 01 movb $0x1,0x19(%r10)
6b02: 66 41 89 42 1e mov %ax,0x1e(%r10)
na->target = ns->target;
6b07: e8 00 00 00 00 callq 6b0c <vxlan_xmit+0xc0c>
6b0c: 49 8d 75 18 lea 0x18(%r13),%rsi
6b10: 49 8d 7d 08 lea 0x8(%r13),%rdi
* Please note: dst & src must both be aligned to u16.
*/
static inline void ether_addr_copy(u8 *dst, const u8 *src)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
*(u32 *)dst = *(const u32 *)src;
6b14: 41 89 c0 mov %eax,%r8d
6b17: b9 3a 00 00 00 mov $0x3a,%ecx
6b1c: ba 20 00 00 00 mov $0x20,%edx
*(u16 *)(dst + 4) = *(const u16 *)(src + 4);
6b21: e8 00 00 00 00 callq 6b26 <vxlan_xmit+0xc26>
6b26: 4c 8b 54 24 20 mov 0x20(%rsp),%r10
ether_addr_copy(&na->opt[2], n->ha);
na->opt[0] = ND_OPT_TARGET_LL_ADDR;
6b2b: be 28 00 00 00 mov $0x28,%esi
na->opt[1] = na_olen >> 3;
6b30: 4c 89 e7 mov %r12,%rdi
6b33: 66 41 89 42 02 mov %ax,0x2(%r10)
na->icmph.icmp6_cksum = csum_ipv6_magic(&pip6->saddr,
6b38: 66 41 c7 45 04 00 20 movw $0x2000,0x4(%r13)
&pip6->daddr, sizeof(*na)+na_olen, IPPROTO_ICMPV6,
6b3f: e8 00 00 00 00 callq 6b44 <vxlan_xmit+0xc44>
na->target = ns->target;
ether_addr_copy(&na->opt[2], n->ha);
na->opt[0] = ND_OPT_TARGET_LL_ADDR;
na->opt[1] = na_olen >> 3;
na->icmph.icmp6_cksum = csum_ipv6_magic(&pip6->saddr,
6b44: 41 0f b6 84 24 91 00 movzbl 0x91(%r12),%eax
6b4b: 00 00
6b4d: 4c 89 ff mov %r15,%rdi
6b50: 83 e0 f9 and $0xfffffff9,%eax
6b53: 83 c8 02 or $0x2,%eax
6b56: 41 88 84 24 91 00 00 mov %al,0x91(%r12)
6b5d: 00
&pip6->daddr, sizeof(*na)+na_olen, IPPROTO_ICMPV6,
csum_partial(na, sizeof(*na)+na_olen, 0));
pip6->payload_len = htons(sizeof(*na)+na_olen);
skb_push(reply, sizeof(struct ipv6hdr));
6b5e: e8 dd 9c ff ff callq 840 <neigh_release>
na->target = ns->target;
ether_addr_copy(&na->opt[2], n->ha);
na->opt[0] = ND_OPT_TARGET_LL_ADDR;
na->opt[1] = na_olen >> 3;
na->icmph.icmp6_cksum = csum_ipv6_magic(&pip6->saddr,
6b63: 4c 89 e7 mov %r12,%rdi
6b66: e8 00 00 00 00 callq 6b6b <vxlan_xmit+0xc6b>
&pip6->daddr, sizeof(*na)+na_olen, IPPROTO_ICMPV6,
csum_partial(na, sizeof(*na)+na_olen, 0));
pip6->payload_len = htons(sizeof(*na)+na_olen);
6b6b: 83 e8 01 sub $0x1,%eax
6b6e: 0f 85 f3 fa ff ff jne 6667 <vxlan_xmit+0x767>
skb_push(reply, sizeof(struct ipv6hdr));
reply->ip_summed = CHECKSUM_UNNECESSARY;
6b74: e9 e6 fa ff ff jmpq 665f <vxlan_xmit+0x75f>
6b79: 0f 0b ud2
6b7b: 89 d2 mov %edx,%edx
}
reply = vxlan_na_create(skb, n,
!!(f ? f->flags & NTF_ROUTER : 0));
neigh_release(n);
6b7d: 44 89 4e 06 mov %r9d,0x6(%rsi)
pip6->payload_len = htons(sizeof(*na)+na_olen);
skb_push(reply, sizeof(struct ipv6hdr));
reply->ip_summed = CHECKSUM_UNNECESSARY;
6b81: 8b 4c 16 fc mov -0x4(%rsi,%rdx,1),%ecx
6b85: 41 89 4c 10 fc mov %ecx,-0x4(%r8,%rdx,1)
6b8a: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
}
reply = vxlan_na_create(skb, n,
!!(f ? f->flags & NTF_ROUTER : 0));
neigh_release(n);
6b91: 0f b7 bb c6 00 00 00 movzwl 0xc6(%rbx),%edi
if (reply == NULL)
goto out;
if (netif_rx_ni(reply) == NET_RX_DROP)
6b98: 41 0f b6 96 75 02 00 movzbl 0x275(%r14),%edx
6b9f: 00
6ba0: e9 2b fc ff ff jmpq 67d0 <vxlan_xmit+0x8d0>
6ba5: 8b 3e mov (%rsi),%edi
6ba7: 89 d2 mov %edx,%edx
unsigned char *skb_pull(struct sk_buff *skb, unsigned int len);
static inline unsigned char *__skb_pull(struct sk_buff *skb, unsigned int len)
{
skb->len -= len;
BUG_ON(skb->len < skb->data_len);
6ba9: 89 39 mov %edi,(%rcx)
if (n) {
bool diff;
diff = !ether_addr_equal(eth_hdr(skb)->h_dest, n->ha);
if (diff) {
memcpy(eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
6bab: 8b 74 16 fc mov -0x4(%rsi,%rdx,1),%esi
6baf: 89 74 11 fc mov %esi,-0x4(%rcx,%rdx,1)
6bb3: e9 f0 f5 ff ff jmpq 61a8 <vxlan_xmit+0x2a8>
6bb8: 48 8b 4a e0 mov -0x20(%rdx),%rcx
6bbc: 48 0b 4a e8 or -0x18(%rdx),%rcx
6bc0: 0f 94 c2 sete %dl
6bc3: e9 3a f7 ff ff jmpq 6302 <vxlan_xmit+0x402>
6bc8: 31 c0 xor %eax,%eax
6bca: e9 42 f7 ff ff jmpq 6311 <vxlan_xmit+0x411>
6bcf: 89 d2 mov %edx,%edx
6bd1: 0f b7 4c 16 fe movzwl -0x2(%rsi,%rdx,1),%ecx
dev->addr_len);
memcpy(eth_hdr(skb)->h_dest, n->ha, dev->addr_len);
6bd6: 66 41 89 4c 10 fe mov %cx,-0x2(%r8,%rdx,1)
6bdc: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
6be3: 0f b7 bb c6 00 00 00 movzwl 0xc6(%rbx),%edi
6bea: 41 0f b6 96 75 02 00 movzbl 0x275(%r14),%edx
6bf1: 00
6bf2: e9 d9 fb ff ff jmpq 67d0 <vxlan_xmit+0x8d0>
6bf7: 41 f6 86 d8 08 00 00 testb $0x10,0x8d8(%r14)
6bfe: 10
if (n) {
bool diff;
diff = !ether_addr_equal(eth_hdr(skb)->h_dest, n->ha);
if (diff) {
memcpy(eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
6bff: 0f 84 f2 fa ff ff je 66f7 <vxlan_xmit+0x7f7>
6c05: 48 8d 7c 24 30 lea 0x30(%rsp),%rdi
6c0a: b9 07 00 00 00 mov $0x7,%ecx
6c0f: 48 8d 74 24 30 lea 0x30(%rsp),%rsi
6c14: f3 ab rep stos %eax,%es:(%rdi)
6c16: 66 c7 44 24 30 0a 00 movw $0xa,0x30(%rsp)
6c1d: 4c 89 f7 mov %r14,%rdi
6c20: 49 8b 40 18 mov 0x18(%r8),%rax
6c24: 49 8b 50 20 mov 0x20(%r8),%rdx
if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
return false;
pip6 = ipv6_hdr(skb);
n = neigh_lookup(ipv6_stub->nd_tbl, &pip6->daddr, dev);
if (!n && (vxlan->flags & VXLAN_F_L3MISS)) {
6c28: 48 89 44 24 38 mov %rax,0x38(%rsp)
6c2d: 48 89 54 24 40 mov %rdx,0x40(%rsp)
6c32: e8 19 b4 ff ff callq 2050 <vxlan_ip_miss>
union vxlan_addr ipa = {
6c37: e9 bb fa ff ff jmpq 66f7 <vxlan_xmit+0x7f7>
6c3c: 41 f6 86 d8 08 00 00 testb $0x10,0x8d8(%r14)
6c43: 10
6c44: 0f 84 1d fa ff ff je 6667 <vxlan_xmit+0x767>
6c4a: 48 8d 7c 24 30 lea 0x30(%rsp),%rdi
.sin6.sin6_addr = pip6->daddr,
.sin6.sin6_family = AF_INET6,
};
vxlan_ip_miss(dev, &ipa);
6c4f: 31 c0 xor %eax,%eax
if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
return false;
pip6 = ipv6_hdr(skb);
n = neigh_lookup(ipv6_stub->nd_tbl, &pip6->daddr, dev);
if (!n && (vxlan->flags & VXLAN_F_L3MISS)) {
union vxlan_addr ipa = {
6c51: b9 07 00 00 00 mov $0x7,%ecx
6c56: 48 8d 74 24 30 lea 0x30(%rsp),%rsi
6c5b: f3 ab rep stos %eax,%es:(%rdi)
6c5d: 8b 44 24 2c mov 0x2c(%rsp),%eax
6c61: 4c 89 f7 mov %r14,%rdi
.sin6.sin6_addr = pip6->daddr,
.sin6.sin6_family = AF_INET6,
};
vxlan_ip_miss(dev, &ipa);
6c64: 66 c7 44 24 30 02 00 movw $0x2,0x30(%rsp)
6c6b: 89 44 24 34 mov %eax,0x34(%rsp)
reply->ip_summed = CHECKSUM_UNNECESSARY;
reply->pkt_type = PACKET_HOST;
if (netif_rx_ni(reply) == NET_RX_DROP)
dev->stats.rx_dropped++;
} else if (vxlan->flags & VXLAN_F_L3MISS) {
6c6f: e8 dc b3 ff ff callq 2050 <vxlan_ip_miss>
6c74: e9 ee f9 ff ff jmpq 6667 <vxlan_xmit+0x767>
6c79: 48 8b 8b d0 00 00 00 mov 0xd0(%rbx),%rcx
union vxlan_addr ipa = {
6c80: e9 23 f8 ff ff jmpq 64a8 <vxlan_xmit+0x5a8>
Disassembly of section .init.text:
0000000000000000 <init_module>:
0: 55 push %rbp
1: be 04 00 00 00 mov $0x4,%esi
6: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
d: 48 89 e5 mov %rsp,%rbp
10: 41 54 push %r12
12: 53 push %rbx
13: e8 00 00 00 00 callq 18 <init_module+0x18>
18: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
1f: e8 00 00 00 00 callq 24 <init_module+0x24>
24: 85 c0 test %eax,%eax
26: 41 89 c4 mov %eax,%r12d
29: 75 41 jne 6c <init_module+0x6c>
2b: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
}
/* Look up Ethernet address in forwarding table */
static struct vxlan_fdb *__vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
32: e8 00 00 00 00 callq 37 <init_module+0x37>
struct hlist_head *head = vxlan_fdb_head(vxlan, mac);
struct vxlan_fdb *f;
hlist_for_each_entry_rcu(f, head, hlist) {
37: 85 c0 test %eax,%eax
39: 89 c3 mov %eax,%ebx
3b: 75 21 jne 5e <init_module+0x5e>
3d: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
}
/* Look up Ethernet address in forwarding table */
static struct vxlan_fdb *__vxlan_find_mac(struct vxlan_dev *vxlan,
const u8 *mac)
{
44: e8 00 00 00 00 callq 49 <init_module+0x49>
struct hlist_head *head = vxlan_fdb_head(vxlan, mac);
struct vxlan_fdb *f;
hlist_for_each_entry_rcu(f, head, hlist) {
49: 89 c3 mov %eax,%ebx
4b: 44 89 e0 mov %r12d,%eax
4e: 85 db test %ebx,%ebx
50: 74 1a je 6c <init_module+0x6c>
52: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
59: e8 00 00 00 00 callq 5e <init_module+0x5e>
5e: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
65: e8 00 00 00 00 callq 6a <init_module+0x6a>
if (ether_addr_equal(mac, f->eth_addr))
6a: 89 d8 mov %ebx,%eax
6c: 5b pop %rbx
6d: 41 5c pop %r12
6f: 5d pop %rbp
70: c3 retq
Disassembly of section .exit.text:
0000000000000000 <cleanup_module>:
0: 55 push %rbp
1: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
8: 48 89 e5 mov %rsp,%rbp
b: e8 00 00 00 00 callq 10 <cleanup_module+0x10>
10: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
17: e8 00 00 00 00 callq 1c <cleanup_module+0x1c>
1c: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
23: e8 00 00 00 00 callq 28 <cleanup_module+0x28>
28: 5d pop %rbp
29: c3 retq
* Re: [PATCH 1/1] vxlan: insert ipv6 macro
2016-10-13 5:28 ` zhuyj
@ 2016-10-13 5:30 ` zhuyj
0 siblings, 0 replies; 6+ messages in thread
From: zhuyj @ 2016-10-13 5:30 UTC (permalink / raw)
To: Jiri Benc
Cc: netdev, pabeni, daniel, Pravin B Shelar, Alexander Duyck, hannes,
David S. Miller
Soon I will analyze the previous patch. I will let you know.
Thanks a lot.
On Thu, Oct 13, 2016 at 1:28 PM, zhuyj <zyjzyj2000@gmail.com> wrote:
> Hi, Jiri
>
> The dumped source code is in the attachment. Please check it. I think
> this file explains everything.
>
> If anything, please just let me know.
> Thanks a lot.
>
> On Wed, Oct 12, 2016 at 9:16 PM, Jiri Benc <jbenc@redhat.com> wrote:
>> On Wed, 12 Oct 2016 21:01:54 +0800, zhuyj wrote:
>>> How to explain the following source code? As you mentioned, are the
>>> #ifdefs in the following source pointless?
>>
>> They are not, the code would not compile without them. Look how struct
>> vxlan_dev is defined.
>>
>> Those are really basic questions you have. I suggest you try yourself
>> before asking such questions next time. In this case, you could
>> trivially remove the #ifdef and see for yourself, as I explained in the
>> previous email. Please do not try to offload your homework to other
>> people. It's very obvious you didn't even try to understand this, even
>> after the feedback you received.
>>
>> And do not top post.
>>
>> Thanks,
>>
>> Jiri
end of thread, other threads:[~2016-10-13 5:30 UTC | newest]
Thread overview: 6+ messages
2016-10-11 8:23 [PATCH 1/1] vxlan: insert ipv6 macro zyjzyj2000
2016-10-11 14:06 ` Jiri Benc
2016-10-12 13:01 ` zhuyj
2016-10-12 13:16 ` Jiri Benc
2016-10-13 5:28 ` zhuyj
2016-10-13 5:30 ` zhuyj